1. 02 Feb 2006: 5 commits
  2. 17 Jan 2006: 8 commits
  3. 14 Jan 2006: 1 commit
  4. 13 Jan 2006: 3 commits
  5. 11 Jan 2006: 1 commit
  6. 09 Jan 2006: 1 commit
  7. 09 Nov 2005: 2 commits
    • [PATCH] sched: resched and cpu_idle rework · 64c7c8f8
      Committed by Nick Piggin
      Make some changes to the NEED_RESCHED and POLLING_NRFLAG to reduce
      confusion, and make their semantics rigid.  Improves efficiency of
      resched_task and some cpu_idle routines.
      
      * In resched_task:
      - TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
        and since we hold it during resched_task, there is no need for an
        atomic test and set there. The only other time this should be set is
        when the task's quantum expires, in the timer interrupt - this is
        protected against because the rq lock is irq-safe.
      
      - If TIF_NEED_RESCHED is set, then we don't need to do anything. It
        won't get unset until the task gets schedule()d off.
      
      - If we are running on the same CPU as the task we resched, then set
        TIF_NEED_RESCHED and no further action is required.
      
      - If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
        after TIF_NEED_RESCHED has been set, then we need to send an IPI.
      
      Using these rules, we are able to remove the test and set operation in
      resched_task, and make clear the previously vague semantics of
      POLLING_NRFLAG.
      
      * In idle routines:
      - Enter cpu_idle with preempt disabled. When the need_resched() condition
        becomes true, explicitly call schedule(). This makes things a bit clearer
        (IMO), though not all architectures have been updated yet.
      
      - Many do a test and clear of TIF_NEED_RESCHED for some reason. According
        to the resched_task rules, this isn't needed (and actually breaks the
        assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
        held). So remove that. Generally one less locked memory op when switching
        to the idle thread.
      
      - Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the
        innermost polling idle loops. The above resched_task semantics allow it to
        remain set until just before the final need_resched() check that precedes
        a halt requiring an interrupt wakeup.
      
        Many idle routines simply never enter such a halt, so POLLING_NRFLAG can
        always be left set, completely eliminating resched IPIs when rescheduling
        the idle task.
      
        POLLING_NRFLAG width can be increased, to reduce the chance of resched
        IPIs; a sketch of the resulting resched_task logic follows below.
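
      Taken together, these rules amount to roughly the resched_task logic sketched
      below. This is a hedged, minimal C sketch of the behaviour described above,
      not the code from the patch itself; it assumes the caller already holds the
      task's runqueue lock, as the rules require.

          /* Sketch only: illustrative, caller holds the task's rq lock. */
          static void resched_task_sketch(struct task_struct *p)
          {
                  int cpu;

                  /* Already marked for reschedule: nothing more to do. */
                  if (test_tsk_thread_flag(p, TIF_NEED_RESCHED))
                          return;

                  set_tsk_thread_flag(p, TIF_NEED_RESCHED);

                  cpu = task_cpu(p);
                  if (cpu == smp_processor_id())
                          return; /* Same CPU: the flag alone is enough. */

                  /* Remote CPU: only send an IPI if it is not polling the flag. */
                  smp_mb();
                  if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
                          smp_send_reschedule(cpu);
          }
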
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: disable preempt in idle tasks · 5bfb5d69
      Committed by Nick Piggin
      Run idle threads with preempt disabled.
      
      Also corrected a bug in arm26's cpu_idle (make it actually call schedule()).
      How did it ever work before?
      
      Might fix the CPU hotplugging hang which Nigel Cunningham noted.
      
      We think the bug hits if the idle thread is preempted after checking
      need_resched() and before going to sleep, and the CPU is then offlined.

      After calling stop_machine_run, the CPU eventually returns from preemption
      into the idle thread and goes to sleep.  It keeps executing the previous
      idle loop and never gets a chance to call play_dead.
      
      By disabling preemption until we are ready to explicitly schedule, this bug is
      fixed and the idle threads generally become more robust.
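
      As an illustration only, here is a minimal C sketch of an idle loop run this
      way. It is not code from the patch; it assumes the generic helpers
      need_resched(), cpu_relax(), schedule() and the preempt_* primitives, and a
      real architecture would use its own halt/idle instruction rather than
      cpu_relax().

          /* Sketch only: the idle thread enters with preemption disabled. */
          void cpu_idle_sketch(void)
          {
                  while (1) {
                          while (!need_resched())
                                  cpu_relax();    /* stand-in for an arch idle/halt */

                          /* Re-enable preemption only around the explicit reschedule. */
                          preempt_enable_no_resched();
                          schedule();
                          preempt_disable();
                  }
          }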
      
      From: alexs <ashepard@u.washington.edu>
      
        PPC build fix
      
      From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
      
        MIPS build fix
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  8. 07 Nov 2005: 5 commits
  9. 31 Oct 2005: 3 commits
  10. 30 Oct 2005: 4 commits
    • [PATCH] mm: i386 sh sh64 ready for split ptlock · 60ec5585
      Committed by Hugh Dickins
      Use pte_offset_map_lock, instead of pte_offset_map (or inappropriate
      pte_offset_kernel) and mm-wide page_table_lock, in sundry arch places.
      
      The i386 vm86 mark_screen_rdonly: yes, there was and is an assumption that the
      screen fits inside the one page table, as indeed it does.
      
      The sh __do_page_fault: which handles both kernel faults (without lock) and
      user mm faults (locked, though it previously did set_pte without locking).
      
      The sh64 flush_cache_range and helpers: which wrongly thought callers held
      page_table_lock before (only its tlb_start_vma did, and no longer does so);
      moved the flush loop down, and adjusted the large versus small range decision
      to consider a range which spans page tables as large.
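
      For reference, the conversion pattern is roughly the one sketched below. The
      function and variable names are made up for illustration and this is not code
      from any of the converted files; it only shows pte_offset_map_lock and
      pte_unmap_unlock standing in for the mm-wide page_table_lock.

          /* Sketch only: walk a pte range under the per-table lock. */
          static void walk_pte_range_sketch(struct mm_struct *mm, pmd_t *pmd,
                                            unsigned long addr, unsigned long end)
          {
                  spinlock_t *ptl;
                  pte_t *pte;

                  pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
                  do {
                          /* ... examine or update *pte here ... */
                  } while (pte++, addr += PAGE_SIZE, addr != end);
                  pte_unmap_unlock(pte - 1, ptl);
          }
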
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: init_mm without ptlock · 872fec16
      Committed by Hugh Dickins
      First step in pushing down the page_table_lock.  init_mm.page_table_lock has
      been used throughout the architectures (usually for ioremap): not to serialize
      kernel address space allocation (that's usually vmlist_lock), but because
      pud_alloc, pmd_alloc and pte_alloc_kernel expect the caller to hold it.
      
      Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
      architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
      and drop it when allocating a new one, to check lest a racing task already
      did.  Similarly no page_table_lock in vmalloc's map_vm_area.
      
      Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
      user mms, which are converted only by a later patch, for now they have to lock
      differently according to whether or not it's init_mm.
      
      If sources get muddled, there's a danger that an arch source taking
      init_mm.page_table_lock will be mixed with common source also taking it (or
      neither take it).  So break the rules and make another change, which should
      break the build for such a mismatch: remove the redundant mm arg from
      pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
      
      Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
      used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
      pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
      map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
      took page_table_lock for no good reason.
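
      After this change, an ioremap-style helper looks roughly like the sketch
      below. The function name and arguments are invented for illustration; the
      point is only that pte_alloc_kernel no longer takes an mm argument and takes
      and drops init_mm.page_table_lock itself when a new page table must be
      allocated.

          /* Sketch only: map one kernel page, letting pte_alloc_kernel lock. */
          static int map_kernel_page_sketch(unsigned long addr, pmd_t *pmd,
                                            unsigned long pfn, pgprot_t prot)
          {
                  pte_t *pte;

                  /* Was: pte_alloc_kernel(&init_mm, pmd, addr) with the lock held. */
                  pte = pte_alloc_kernel(pmd, addr);
                  if (!pte)
                          return -ENOMEM;

                  set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
                  return 0;
          }
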
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: sh64 hugetlbpage.c · 147efea8
      Committed by Hugh Dickins
      The sh64 hugetlbpage.c seems to be erroneous, left over from a bygone age,
      clashing with the common hugetlb.c.  Replace it with a copy of the sh
      hugetlbpage.c, minus the mk_pte_huge macro that neither uses.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • Create platform_device.h to contain all the platform device details. · d052d1be
      Committed by Russell King
      Convert everyone who uses platform_bus_type to include
      linux/platform_device.h.
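
      A typical user after the change looks roughly like the sketch below; the
      device name is invented for illustration and the snippet is not taken from
      any converted driver. The only point is the direct include of
      linux/platform_device.h rather than relying on indirect includes.

          #include <linux/platform_device.h>

          /* Sketch only: a board file registering one platform device. */
          static struct platform_device example_device = {
                  .name = "example",
                  .id   = -1,
          };

          static int __init example_board_init(void)
          {
                  return platform_device_register(&example_device);
          }
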
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
  11. 28 Oct 2005: 1 commit
  12. 14 Oct 2005: 1 commit
  13. 12 Sep 2005: 1 commit
    • kbuild: rename prepare to archprepare to fix dependency chain · 5bb78269
      Committed by Sam Ravnborg
      When introducing the generic asm-offsets.h support, the dependency chain
      for the prepare targets was changed, which broke all build scripts that
      expected include/asm/asm-offsets.h to be made when using the prepare target.
      With the limited number of prepare targets left in arch Makefiles,
      the trivial solution was to introduce a new arch-specific target: archprepare
      
      The dependency chain looks like this now:
      
      prepare
        |
        +--> prepare0
               |
               +--> archprepare
                      |
                      +--> scripts_basic
                      +--> prepare1
                             |
                             +--> prepare2
                                    |
                                    +--> prepare3
      
      So prepare3 is processed before prepare2, etc.
      This guarantees that the asm symlink, version.h and scripts_basic
      are all updated before archprepare is processed.
      
      prepare0, which builds the asm-offsets.h file, needs the
      actions performed by archprepare.
      
      The head target is now named prepare, because user scripts will most
      likely use that target, but prepare-all has been kept for compatibility.
      Documentation/kbuild/makefiles.txt has been updated accordingly.
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  14. 11 Sep 2005: 3 commits
  15. 10 Sep 2005: 1 commit