1. 12 Jan 2006, 1 commit
  2. 10 Jan 2006, 1 commit
  3. 09 Jan 2006, 1 commit
  4. 14 Nov 2005, 2 commits
  5. 10 Nov 2005, 1 commit
  6. 09 Nov 2005, 7 commits
    • [PATCH] sched: resched and cpu_idle rework · 64c7c8f8
      Authored by Nick Piggin
      Make some changes to the NEED_RESCHED and POLLING_NRFLAG handling to
      reduce confusion and make their semantics rigid.  This improves the
      efficiency of resched_task and some cpu_idle routines.
      
      * In resched_task:
      - TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
        and since we hold it during resched_task, there is no need for an
        atomic test-and-set there. The only other place this flag is set is
        when the task's quantum expires, in the timer interrupt - this is
        protected against because the rq lock is irq-safe.
      
      - If TIF_NEED_RESCHED is already set, then we don't need to do anything.
        It won't get unset until the task gets schedule()d off.
      
      - If we are running on the same CPU as the task we are rescheduling, then
        set TIF_NEED_RESCHED and no further action is required.
      
      - If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
        after TIF_NEED_RESCHED has been set, then we need to send an IPI.
      
      Using these rules, we are able to remove the test-and-set operation in
      resched_task, and make clear the previously vague semantics of
      POLLING_NRFLAG.
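
      In code, the resulting decision tree looks roughly like this user-space
      sketch (the struct fields and the IPI "send" are simplified stand-ins,
      not the kernel's real API):

          #include <stdbool.h>
          #include <stdio.h>

          struct task {
              int  cpu;              /* CPU the task is running on */
              bool need_resched;     /* models TIF_NEED_RESCHED */
              bool polling_nrflag;   /* models TIF_POLLING_NRFLAG */
          };

          /* Caller must hold the task's runqueue lock (not modelled here). */
          static void resched_task(struct task *p, int this_cpu)
          {
              if (p->need_resched)
                  return;               /* stays set until schedule()d off */

              /* Plain store suffices: the rq lock serializes all writers. */
              p->need_resched = true;

              if (p->cpu == this_cpu)
                  return;               /* local need_resched check catches it */

              if (!p->polling_nrflag)
                  printf("send resched IPI to CPU %d\n", p->cpu);
              /* else: the target polls the flag and needs no IPI */
          }

          int main(void)
          {
              struct task idle = { .cpu = 1, .polling_nrflag = true };
              resched_task(&idle, 0);   /* target polls: flag set, no IPI */
              resched_task(&idle, 0);   /* flag already set: nothing to do */
              return 0;
          }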
      
      * In idle routines:
      - Enter cpu_idle with preemption disabled. When the need_resched()
        condition becomes true, explicitly call schedule(). This makes things a
        bit clearer (IMO), though not all architectures have been updated yet.
      
      - Many idle routines do a test-and-clear of TIF_NEED_RESCHED for some
        reason. According to the resched_task rules, this isn't needed (and
        actually breaks the assumption that TIF_NEED_RESCHED is only cleared
        with the runqueue lock held). So remove that; this generally saves one
        locked memory operation when switching to the idle thread.
      
      - Many idle routines clear TIF_POLLING_NRFLAG and only set it in the
        innermost polling idle loops. The resched_task semantics above allow it
        to remain set until just before the last need_resched() check preceding
        a halt that requires an interrupt wakeup.
      
        Many idle routines simply never enter such a halt, so POLLING_NRFLAG
        can always be left set, completely eliminating resched IPIs when
        rescheduling the idle task.
      
        The window in which POLLING_NRFLAG is set can thus be widened, reducing
        the chance of resched IPIs.
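
      A compilable sketch of the two idle-loop shapes this describes (the
      TIF_* flags become plain globals, and schedule()/the halt are stubs, so
      this illustrates the protocol, not kernel code):

          #include <stdbool.h>

          static volatile bool tif_need_resched;
          static volatile bool tif_polling_nrflag;

          static void schedule(void) { tif_need_resched = false; /* pick next */ }
          static void halt_until_interrupt(void) { /* e.g. the hlt instruction */ }

          /* Entered with preemption disabled under the new convention. */
          void cpu_idle_polling(void)
          {
              tif_polling_nrflag = true;   /* left set: resched IPIs never needed */
              for (;;) {
                  while (!tif_need_resched)
                      ;                    /* remote resched_task() just sets the flag */
                  schedule();              /* explicit call, no test-and-clear */
              }
          }

          /* An idle routine that really halts drops the polling flag only
           * just before the final need_resched check: */
          void cpu_idle_halting(void)
          {
              for (;;) {
                  tif_polling_nrflag = true;
                  /* ... poll or do housekeeping while work may appear ... */
                  tif_polling_nrflag = false;  /* from here on, wakeups need an IPI */
                  __sync_synchronize();        /* order the clear before the check */
                  if (!tif_need_resched)
                      halt_until_interrupt();
                  schedule();
              }
          }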
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: consider migration thread with smp nice · ede3d0fb
      Authored by Con Kolivas
      The intermittent scheduling of the migration thread at ultra-high priority
      makes the smp nice handling see that runqueue as heavily loaded.  The
      migration thread itself actually performs the balancing, so its influence
      on priority balancing should be ignored.
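
      A toy illustration of the idea (the is_migration_thread marker and the
      weighting formula are assumptions made for the sketch, not the patch's
      actual test):

          #include <stdbool.h>

          #define MAX_PRIO 140   /* one past the lowest priority value */

          struct task {
              int  static_prio;          /* 100..139 for SCHED_NORMAL tasks */
              bool is_migration_thread;  /* hypothetical marker */
          };

          /* Sum only tasks whose priority should influence balancing: the
           * migration thread runs at ultra-high priority but only briefly,
           * so counting it makes the queue look far busier than it is. */
          static int prio_bias(const struct task *tasks, int n)
          {
              int bias = 0;
              for (int i = 0; i < n; i++) {
                  if (tasks[i].is_migration_thread)
                      continue;          /* ignore its transient influence */
                  bias += MAX_PRIO - tasks[i].static_prio;
              }
              return bias;
          }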
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: correct smp_nice_bias · 6dd4a85b
      Authored by Con Kolivas
      The priority biasing was wrong: it multiplied the total load by the total
      priority bias, which ruins the ratio of loads between runqueues. This
      patch corrects the ratios of loads between runqueues so that they are
      proportional to overall load. (2nd attempt.)
      
      From: Dave Kleikamp <shaggy@austin.ibm.com>
      
        This patch fixes a divide-by-zero error that I hit on a two-way i386
        machine.  rq->nr_running is tested to be non-zero, but may change by the
        time it is used in the division.  Saving the value to a local variable
        ensures that the same value that is checked is used in the division.
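
      The fix pattern in miniature (the biasing formula itself is an
      assumption; the point is the single snapshot of nr_running):

          struct rq { volatile unsigned long nr_running; unsigned long prio_bias; };

          static unsigned long biased_load(const struct rq *rq, unsigned long load)
          {
              unsigned long nr = rq->nr_running;   /* read exactly once */
              if (nr == 0)
                  return load;   /* the value checked is the value divided by */
              return load * rq->prio_bias / nr;
          }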
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: smp nice bias busy queues on idle rebalance · 3b0bd9bc
      Authored by Con Kolivas
      To intensify the 'nice' support across physical cpus on SMP, we can bias
      the loads during idle rebalancing. To prevent idle rebalance from trying
      to pull tasks from queues that merely appear heavily loaded, we only bias
      the load if there is more than one task running.
      
      Add some minor micro-optimisations and have only one return point in the
      __source_load and __target_load functions.
      
      Fix the fact that target_load was not biased by priority when type == 0.
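
      Roughly, the biased load calculation now has this shape (field names and
      the exact formula are simplified assumptions):

          struct rq { unsigned long nr_running, raw_load, prio_bias; };

          static unsigned long source_load(const struct rq *rq, int type)
          {
              unsigned long load = rq->raw_load;
              (void)type;   /* type selects raw vs. averaged load; elided here */

              /* Bias only queues with more than one runnable task, so idle
               * rebalance is not scared off a queue that merely *looks* busy. */
              if (rq->nr_running > 1)
                  load = load * rq->prio_bias / rq->nr_running;

              return load;   /* single exit point, as in the patch */
          }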
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: account rt tasks in prio_bias() · dad1c65c
      Authored by Con Kolivas
      Real-time tasks' effect on prio_bias should be based on their real-time
      priority level instead of their static_prio, which is based on nice.
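
      Sketch of the distinction (the RT weighting is an assumption; the patch's
      actual arithmetic may differ):

          #define MAX_PRIO    140
          #define MAX_RT_PRIO 100

          struct task { int policy, static_prio, rt_priority; };
          enum { SCHED_NORMAL, SCHED_FIFO, SCHED_RR };

          /* A task's contribution to prio_bias: RT tasks weighted by their
           * RT priority level, everyone else by nice-derived static_prio. */
          static int bias_prio(const struct task *p)
          {
              if (p->policy != SCHED_NORMAL)
                  return MAX_PRIO - MAX_RT_PRIO + p->rt_priority;
              return MAX_PRIO - p->static_prio;
          }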
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: change prio bias only if queued · 738a2ccb
      Authored by Con Kolivas
      prio_bias should only be adjusted in set_user_nice if p is actually
      queued at the time.
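
      In outline (with on_rq standing in for the real queued test):

          #include <stdbool.h>

          #define MAX_PRIO 140

          struct rq   { int prio_bias; };
          struct task { int static_prio; bool on_rq; };

          /* Adjust the queue-wide bias only while p actually contributes. */
          static void set_user_nice_sketch(struct task *p, struct rq *rq, int prio)
          {
              if (p->on_rq)
                  rq->prio_bias -= MAX_PRIO - p->static_prio;  /* retire old weight */
              p->static_prio = prio;
              if (p->on_rq)
                  rq->prio_bias += MAX_PRIO - p->static_prio;  /* apply new weight */
          }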
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: implement nice support across physical cpus on SMP · b910472d
      Authored by Con Kolivas
      This patch implements 'nice' support across physical cpus on SMP.
      
      It introduces an extra runqueue variable, prio_bias, which is the sum of
      the (inverted) static priorities of all the tasks on the runqueue.
      
      This is then used to bias busy rebalancing between runqueues to obtain good
      distribution of tasks of different nice values.  By biasing the balancing only
      during busy rebalancing we can avoid having any significant loss of throughput
      by not affecting the carefully tuned idle balancing already in place.  If all
      tasks are running at the same nice level this code should also have minimal
      effect.  The code is optimised out in the !CONFIG_SMP case.
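
      For concreteness, with the kernel's nice-to-priority mapping the per-task
      weight works out like this (self-contained sketch):

          #define MAX_PRIO           140
          #define NICE_TO_PRIO(nice) (100 + (nice) + 20)

          /* Inverted static priority: nice -20 counts 40x a nice +19 task. */
          static int task_bias(int nice) { return MAX_PRIO - NICE_TO_PRIO(nice); }
          /* task_bias(-20) == 40, task_bias(0) == 20, task_bias(19) == 1 */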
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  7. 07 Nov 2005, 2 commits
  8. 05 Nov 2005, 1 commit
  9. 31 Oct 2005, 1 commit
  10. 30 Oct 2005, 1 commit
    • [PATCH] mm: update_hiwaters just in time · 365e9c87
      Authored by Hugh Dickins
      update_mem_hiwater has attracted various criticisms, in particular from those
      concerned with mm scalability.  Originally it was called whenever rss or
      total_vm got raised.  Then many of those callsites were replaced by a timer
      tick call from account_system_time.  Now Frank van Maarseveen reports that
      approach to be inadequate as well.  How about this?  Works for Frank.
      
      Replace update_mem_hiwater, a poor combination of two unrelated ops, by macros
      update_hiwater_rss and update_hiwater_vm.  Don't attempt to keep
      mm->hiwater_rss up to date at timer tick, nor every time we raise rss (usually
      by 1): those are hot paths.  Do the opposite, update only when about to lower
      rss (usually by many), or just before final accounting in do_exit.  Handle
      mm->hiwater_vm in the same way, though it's much less of an issue.  Demand
      that whoever collects these hiwater statistics do the work of taking the
      maximum with rss or total_vm.
      
      And there has been no collector of these hiwater statistics in the tree.  The
      new convention needs an example, so match Frank's usage by adding a VmPeak
      line above VmSize to /proc/<pid>/status, and also a VmHWM line above VmRSS
      (High-Water-Mark or High-Water-Memory).
      
      There was a particular anomaly during mremap move, that hiwater_vm might be
      captured too high.  A fleeting such anomaly remains, but it's quickly
      corrected now, whereas before it would stick.
      
      What locking?  None: if the app is racy then these statistics will be racy,
      it's not worth any overhead to make them exact.  But whenever it suits,
      hiwater_vm is updated under exclusive mmap_sem, and hiwater_rss under
      page_table_lock (for now) or with preemption disabled (later on): without
      going to any trouble, minimize the time between reading current values and
      updating, to minimize those occasions when a racing thread bumps a count up
      and back down in between.
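
      The convention in miniature (the kernel versions operate on mm_struct
      counters; this sketch strips that away):

          struct mm { unsigned long rss, total_vm, hiwater_rss, hiwater_vm; };

          /* Called only on the rare paths about to *lower* the counters. */
          #define update_hiwater_rss(mm) do { \
                  if ((mm)->rss > (mm)->hiwater_rss) \
                          (mm)->hiwater_rss = (mm)->rss; \
              } while (0)
          #define update_hiwater_vm(mm) do { \
                  if ((mm)->total_vm > (mm)->hiwater_vm) \
                          (mm)->hiwater_vm = (mm)->total_vm; \
              } while (0)

          /* The collector (e.g. /proc/<pid>/status) takes the maximum: */
          static unsigned long vm_peak(const struct mm *mm)
          {
              return mm->hiwater_vm > mm->total_vm ? mm->hiwater_vm
                                                   : mm->total_vm;
          }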
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 27 Oct 2005, 1 commit
  12. 14 Sep 2005, 1 commit
  13. 12 Sep 2005, 2 commits
  14. 11 Sep 2005, 12 commits
    • [PATCH] sched: allow the load to grow upto its cpu_power · 0c117f1b
      Authored by Siddha, Suresh B
      Don't pull tasks from a group if that would cause the group's total load
      to drop below its total cpu_power (i.e., cause the group to start going
      idle).
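
      The guard has roughly this shape (names simplified; load and cpu_power
      are assumed to be in the same scaled units):

          struct group { unsigned long load, cpu_power; };

          /* Only pull if the source group stays at or above its capacity. */
          static int can_pull(const struct group *busiest, unsigned long pulled)
          {
              return busiest->load >= busiest->cpu_power + pulled;
          }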
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: don't kick ALB in the presence of pinned task · fa3b6ddc
      Authored by Siddha, Suresh B
      Jack Steiner brought up this issue at my OLS talk.
      
      Take a scenario where two tasks are pinned to two HT threads in a physical
      package.  Idle packages in the system will keep kicking the
      migration_thread on the busy package without any success.
      
      We will run into similar scenarios in the presence of CMP/NUMA.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: use cached variable in sys_sched_yield() · 5927ad78
      Authored by Renaud Lienhart
      In sys_sched_yield(), we cache current->array in the "array" variable, thus
      there's no need to dereference "current" again later.
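
      The pattern, outside the kernel (current_task standing in for the
      kernel's current macro):

          struct prio_array { int nr_active; };
          struct task       { struct prio_array *array; };
          extern struct task *current_task;

          void sched_yield_sketch(void)
          {
              struct prio_array *array = current_task->array;  /* dereference once */
              /* ... every later use goes through the cached `array` ... */
              (void)array;
          }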
      Signed-off-by: Renaud Lienhart <renaud.lienhart@free.fr>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: HT optimisation · 5969fe06
      Authored by Nick Piggin
      If an idle sibling of an HT queue encounters a busy sibling, then have it
      perform higher-level load balancing of the non-idle variety.
      
      Performance of multiprocessor HT systems with low numbers of tasks
      (generally < number of virtual CPUs) can be significantly worse than that
      of the exact same workloads running in non-HT mode.  The reason is largely
      due to poor scheduling behaviour.
      
      This patch improves the situation, making the performance gap far less
      significant on one problematic test case (tbench).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: less locking · e17224bf
      Authored by Nick Piggin
      During periodic load balancing, don't hold this runqueue's lock while
      scanning remote runqueues, which can take a non-trivial amount of time,
      especially on very large systems.
      
      Holding the runqueue lock will only help to stabilise ->nr_running;
      however, this doesn't help much, because tasks being woken will simply get
      held up on the runqueue lock, so ->nr_running would not provide a really
      accurate picture of runqueue load in that case anyway.
      
      What's more, ->nr_running (and possibly the cpu_load averages) of remote
      runqueues won't be stable anyway, so load balancing is always an inexact
      operation.
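
      The shape of the change, with hypothetical helper names standing in for
      the real find_busiest_*/locking functions:

          struct rq;
          extern struct rq *find_busiest_queue_unlocked(struct rq *this_rq);
          extern void double_lock_balance(struct rq *a, struct rq *b);
          extern void move_tasks_locked(struct rq *dst, struct rq *src);
          extern void unlock_both(struct rq *a, struct rq *b);

          void load_balance_sketch(struct rq *this_rq)
          {
              /* Scan with no lock held: remote ->nr_running is unstable
               * anyway, so the result is approximate either way. */
              struct rq *busiest = find_busiest_queue_unlocked(this_rq);
              if (!busiest || busiest == this_rq)
                  return;

              double_lock_balance(this_rq, busiest);  /* lock only to move tasks */
              move_tasks_locked(this_rq, busiest);
              unlock_both(this_rq, busiest);
          }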
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: less newidle locking · d6d5cfaf
      Authored by Nick Piggin
      Similarly to the earlier change in load_balance, only lock the runqueue in
      load_balance_newidle if the busiest queue found has nr_running > 1.  This
      will reduce the frequency of expensive remote runqueue lock acquisitions
      in the schedule() path on some workloads.
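
      The newidle variant, with the same hypothetical helpers as the sketch
      above:

          struct rq { unsigned long nr_running; };
          extern void double_lock_balance(struct rq *a, struct rq *b);
          extern void pull_one_task(struct rq *dst, struct rq *src);
          extern void unlock_both(struct rq *a, struct rq *b);

          void load_balance_newidle_sketch(struct rq *this_rq, struct rq *busiest)
          {
              if (!busiest || busiest->nr_running <= 1)
                  return;        /* nothing it could spare: skip the lock */
              double_lock_balance(this_rq, busiest);
              pull_one_task(this_rq, busiest);
              unlock_both(this_rq, busiest);
          }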
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: fix SMT scheduler latency bug · 67f9a619
      Authored by Ingo Molnar
      William Weston reported unusually high scheduling latencies on his x86 HT
      box, on the -RT kernel.  I managed to reproduce it on my HT box and the
      latency tracer shows the incident in action:
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup (try_to_wake_up)
              ..............................................................
              ... we are running on CPU#3, PID 2778 gets woken to CPU#1: ...
              ..............................................................
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup <<...>-2778> (73 1)
            du-2803  3Dnh2    0us : _raw_spin_unlock (try_to_wake_up)
              ................................................
              ... still on CPU#3, we send an IPI to CPU#1: ...
              ................................................
            du-2803  3Dnh1    0us : resched_task (try_to_wake_up)
            du-2803  3Dnh1    1us : smp_send_reschedule (try_to_wake_up)
            du-2803  3Dnh1    1us : send_IPI_mask_bitmask (smp_send_reschedule)
            du-2803  3Dnh1    2us : _raw_spin_unlock_irqrestore (try_to_wake_up)
              ...............................................
              ... 1 usec later, the IPI arrives on CPU#1: ...
              ...............................................
        <idle>-0     1Dnh.    2us : smp_reschedule_interrupt (c0100c5a 0 0)
      
      So far so good, this is the normal wakeup/preemption mechanism.  But here
      comes the scheduler anomaly on CPU#1:
      
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    3us : __schedule (preempt_schedule_irq)
        <idle>-0     1Dnh.    3us : profile_hit (__schedule)
        <idle>-0     1Dnh1    3us : sched_clock (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irq (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irqsave (__schedule)
        <idle>-0     1Dnh2    5us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh1    5us : preempt_schedule (__schedule)
        <idle>-0     1Dnh1    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh2    6us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    8us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh2    8us : preempt_schedule (__schedule)
        <idle>-0     1Dnh2    8us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    9us : trace_stop_sched_switched (__schedule)
        <idle>-0     1Dnh2    9us : _raw_spin_lock (trace_stop_sched_switched)
        <idle>-0     1Dnh3   10us : trace_stop_sched_switched <<...>-2778> (73 8c)
        <idle>-0     1Dnh3   10us : _raw_spin_unlock (trace_stop_sched_switched)
        <idle>-0     1Dnh1   10us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh.   11us : local_irq_enable_noresched (preempt_schedule_irq)
        <idle>-0     1Dnh.   11us < (0)
      
      we didn't pick up pid 2778! It only gets scheduled much later:
      
         <...>-2778  1Dnh2  412us : __switch_to (__schedule)
         <...>-2778  1Dnh2  413us : __schedule <<idle>-0> (8c 73)
         <...>-2778  1Dnh2  413us : _raw_spin_unlock (__schedule)
         <...>-2778  1Dnh1  413us : trace_stop_sched_switched (__schedule)
         <...>-2778  1Dnh1  414us : _raw_spin_lock (trace_stop_sched_switched)
         <...>-2778  1Dnh2  414us : trace_stop_sched_switched <<...>-2778> (73 1)
         <...>-2778  1Dnh2  414us : _raw_spin_unlock (trace_stop_sched_switched)
         <...>-2778  1Dnh1  415us : trace_stop_sched_switched (__schedule)
      
      The reason for this anomaly is the following code in dependent_sleeper():
      
                      /*
                       * If a user task with lower static priority than the
                       * running task on the SMT sibling is trying to schedule,
                       * delay it till there is proportionately less timeslice
                       * left of the sibling task to prevent a lower priority
                       * task from using an unfair proportion of the
                       * physical cpu's resources. -ck
                       */
      [...]
                              if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) /
                                      100) > task_timeslice(p)))
                                              ret = 1;
      
      Note that, in contrast to the comment above, we don't actually do the
      check based on static priority; we do the check based on timeslices.  But
      timeslices go up and down, and even highprio tasks can randomly have very
      low timeslices (just before their next refill) and can thus be judged as
      'lowprio' by the above piece of code.  This condition is clearly buggy.
      The correct test is to check for static_prio _and_ to check for the
      preemption priority.  Even on different static priority levels, a
      higher-prio interactive task should not be delayed due to a
      higher-static-prio CPU hog.
      
      There is a symmetric bug in the 'kick SMT sibling' code of this function as
      well, which can be solved in a similar way.
      
      The patch below (against the current scheduler queue in -mm) fixes both
      bugs.  I have build- and boot-tested this on x86 SMT, and nice +20 tasks
      still get properly throttled - so the dependent-sleeper logic is still in
      action.
      
      Btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
      property was applied too liberally, so this fix is likely a throughput
      improvement as well.
      
      I separated out a smt_slice() function to make the code easier to read.
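
      For reference, a sketch of the reworked helper and test (the exact
      condition in the patch differs in detail; this only shows its shape):

          struct task { int static_prio, time_slice; };
          extern int task_timeslice(const struct task *p);   /* stand-in */

          /* The slice a sibling's task is "entitled" to under the SMT gain. */
          static int smt_slice(const struct task *p, int per_cpu_gain)
          {
              return p->time_slice * (100 - per_cpu_gain) / 100;
          }

          /* Require a genuinely higher static priority on the sibling, not
           * just a momentarily larger timeslice. */
          static int should_delay(const struct task *p,
                                  const struct task *smt_curr, int per_cpu_gain)
          {
              return smt_curr->static_prio < p->static_prio &&
                     smt_slice(smt_curr, per_cpu_gain) > task_timeslice(p);
          }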
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: TASK_NONINTERACTIVE · d79fc0fc
      Authored by Ingo Molnar
      This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
      used by blocking points to mark the task's wait as "non-interactive".  This
      does not mean the task will be considered a CPU-hog - the wait will simply
      not have an effect on the waiting task's priority - positive or negative
      alike.  Right now only pipe_wait() will make use of it, because it's a
      common source of not-so-interactive waits (kernel compilation jobs, etc.).
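
      Usage shape at a blocking point (the flag value and helpers here are
      illustrative stand-ins, not the kernel's definitions):

          #define TASK_INTERRUPTIBLE  0x01
          #define TASK_NONINTERACTIVE 0x40   /* illustrative value */

          extern void set_current_state(int state);
          extern void schedule(void);

          /* The sleep happens exactly as before; only the interactivity
           * accounting of the wait changes. */
          void pipe_wait_sketch(void)
          {
              set_current_state(TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE);
              schedule();
          }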
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched cleanups · 95cdf3b7
      Authored by Ingo Molnar
      whitespace cleanups.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: make idlest_group/cpu cpus_allowed-aware · da5a5522
      Authored by M.Baris Demiray
      Add the relevant checks to find_idlest_group() and find_idlest_cpu() so
      that they return only groups containing allowed CPUs, and only allowed
      CPUs, respectively.
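
      The added filter in outline (a plain bitmask stands in for cpumask_t):

          struct task { unsigned long cpus_allowed; };

          static int cpu_allowed(const struct task *p, int cpu)
          {
              return (p->cpus_allowed >> cpu) & 1;
          }

          /* find_idlest_cpu()-style loop with the check in place: */
          static int find_idlest_cpu_sketch(const struct task *p,
                                            const unsigned long *load, int ncpus)
          {
              int best = -1;
              unsigned long best_load = ~0UL;
              for (int cpu = 0; cpu < ncpus; cpu++) {
                  if (!cpu_allowed(p, cpu))
                      continue;              /* skip disallowed CPUs */
                  if (load[cpu] < best_load) {
                      best_load = load[cpu];
                      best = cpu;
                  }
              }
              return best;
          }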
      Signed-off-by: M.Baris Demiray <baris@labristeknoloji.com>
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: run SCHED_NORMAL tasks with real time tasks on SMT siblings · fc38ed75
      Authored by Con Kolivas
      The hyperthread-aware nice handling currently puts to sleep any
      non-real-time task when a real-time task is running on its sibling cpu.
      This can lead to prolonged starvation, with the non-real-time task pegged
      to the cpu and load balancing never pulling it away.
      
      Currently we force lower-priority hyperthread tasks to run for a
      percentage of the time difference based on timeslice differences, which is
      meaningless when comparing real-time tasks to SCHED_NORMAL tasks.  We can
      instead allow non-real-time tasks to run with real-time tasks on the
      sibling, for up to per_cpu_gain% of the time, if we use jiffies as a
      counter.
      
      Cleanups and micro-optimisations to the relevant code section should make
      it more understandable as well.
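
      The jiffies-based sharing in outline (the patch's actual bookkeeping
      differs; this only shows the duty-cycle idea):

          /* Hold back the SCHED_NORMAL sibling except for per_cpu_gain% of
           * wall-clock ticks, instead of comparing timeslices against an RT
           * task. Returns 1 when the task should be delayed. */
          static int rt_dependent_delay(unsigned long jiffies_now, int per_cpu_gain)
          {
              return (int)(jiffies_now % 100) >= per_cpu_gain;
          }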
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] spinlock consolidation · fb1c8f93
      Authored by Ingo Molnar
      This patch (written by me and also containing many suggestions from Arjan
      van de Ven) does a major cleanup of the spinlock code.  It does the
      following things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code.
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
      
      Most notably, there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c.  (Previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds.)
      
      Also, I've enhanced the rwlock debugging facility: it will now track
      write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
      All locks have lockup detection now, which will work for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset of the spinlock code - all the rest that can be generalized now
      lives in the generic headers:
      
       include/asm-i386/spinlock_types.h       |   16
       include/asm-x86_64/spinlock_types.h     |   16
      
      I have also split up the various spinlock variants into separate files,
      making it easier to see which does what. The new layout is:
      
         SMP                         |  UP
         ----------------------------|-----------------------------------
         asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
         linux/spinlock_types.h      |  linux/spinlock_types.h
         asm/spinlock_smp.h          |  linux/spinlock_up.h
         linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
         linux/spinlock.h            |  linux/spinlock.h
      
      /*
       * here's the role of the various spinlock/rwlock related include files:
       *
       * on SMP builds:
       *
       *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
       *                        initializers
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
       *                        implementations, mostly inline assembly code
       *
       *   (also included on UP-debug builds:)
       *
       *  linux/spinlock_api_smp.h:
       *                        contains the prototypes for the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       *
       * on UP builds:
       *
        *  linux/spinlock_types_up.h:
       *                        contains the generic, simplified UP spinlock type.
       *                        (which is an empty structure on non-debug builds)
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  linux/spinlock_up.h:
       *                        contains the __raw_spin_*()/etc. version of UP
       *                        builds. (which are NOPs on non-debug, non-preempt
       *                        builds)
       *
       *   (included on UP-non-debug builds:)
       *
       *  linux/spinlock_api_up.h:
       *                        builds the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       */
      
      All SMP and UP architectures are converted by this patch.
      
      arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
      cross-compilers.  m32r, mips, sh and sparc have not been tested yet, but
      should be mostly fine.
      
      From: Grant Grundler <grundler@parisc-linux.org>
      
        Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
        Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
        non-SMP kernels.  That should be trivial to fix up later if necessary.
      
        I converted bit ops atomic_hash lock to raw_spinlock_t.  Doing so avoids
        some ugly nesting of linux/*.h and asm/*.h files.  Those particular locks
        are well tested and contained entirely inside arch specific code.  I do NOT
        expect any new issues to arise with them.
      
        If someone ever does need to use debug/metrics with them, then they will
        need to unravel this hairball between spinlocks, atomic ops, and bit ops
        that exists only because parisc has exactly one atomic instruction: LDCW
        (load and clear word).
      
      From: "Luck, Tony" <tony.luck@intel.com>
      
         ia64 fix
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
      Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
      Cc: Matthew Wilcox <willy@debian.org>
      Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
      Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
      Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  15. 10 Sep 2005, 1 commit
  16. 08 Sep 2005, 2 commits
  17. 07 Sep 2005, 1 commit
  18. 19 Aug 2005, 1 commit
  19. 27 Jul 2005, 1 commit