  1. 17 March 2008, 5 commits
    • drm/radeon: fixup RV550 chip family · 16d3be46
      Authored by Alex Deucher
      This fixes up the RV550 chips which are based on RV515, not RV530.
      It also adds another RS690 PCI ID.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/via: attempt again to stabilise the AGP DMA command submission. · f0fb6d77
      Authored by Thomas Hellstrom
      It's worth remembering that all new bright ideas on how to make this
      command reader work properly and according to the docs will probably
      fail :(. Bring in some old code.

      Also allow a larger SG-DMA download stride, and remove unnecessary waits
      for command-regulator pauses.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm: Fix race that can lockup the kernel · 9df5808c
      Authored by Mike Isely
      The i915_vblank_swap() function schedules an automatic buffer swap
      upon receipt of the vertical sync interrupt.  Such an operation is
      lengthy so it can't be allowed to happen in normal interrupt context,
      thus the DRM implements this by scheduling the work in a kernel
      softirq-scheduled tasklet.  In order for the buffer swap to work
      safely, the DRM's central lock must be taken, via a call to
      drm_lock_take() located in drivers/char/drm/drm_irq.c within the
      function drm_locked_tasklet_func().  The lock-taking logic uses a
      non-interrupt-blocking spinlock to implement the manipulations needed
      to take the lock.  These semantics would be safe if the spinlock were
      only ever taken from process context.  However, this buffer swap happens
      from softirq context, which is really a form of interrupt
      context.  Thus we have an unsafe situation, in that
      drm_locked_tasklet_func() can block on a spinlock already taken by a
      thread in process context which will never get scheduled again because
      of the blocked softirq tasklet.  This wedges the kernel hard.
      
      To trigger this bug, run a dual-head cloned mode configuration which
      uses the i915 drm, then execute an opengl application which
      synchronizes buffer swaps against the vertical sync interrupt.  In my
      testing, a lockup always results after running anywhere from 5 minutes
      to an hour and a half.  I believe dual-head is needed to really
      trigger the problem because then the vertical sync interrupt handling
      is no longer predictable (due to being interrupt-sourced from two
      different heads running at different speeds).  This raises the
      probability of the tasklet trying to run while the userspace DRI is
      doing things to the GPU (and manipulating the DRM lock).
      
      The fix is to change the relevant spinlock semantics to be the
      interrupt-blocking form.  After this change I am no longer able to
      trigger the lockup; the longest test run so far was 20 hours (test
      stopped after that point).
      
      Note: I have examined the places where this spinlock is being
      employed; all are reasonably short bounded sequences and should be
      suitable for interrupts being blocked without impacting overall kernel
      interrupt response latency.
      Signed-off-by: Mike Isely <isely@pobox.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
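      A minimal sketch of the locking change described above (the device
      pointer and field names here are placeholders, not the exact drm_irq.c
      code):

          /* Before: process context takes the lock without blocking interrupts.
           * The softirq-scheduled tasklet can preempt this holder on the same
           * CPU and then spin forever on a lock that CPU already holds. */
          static void drm_lock_state_unsafe(struct drm_device *dev)
          {
                  spin_lock(&dev->lock.spinlock);
                  /* manipulate DRM lock state */
                  spin_unlock(&dev->lock.spinlock);
          }

          /* After: the interrupt-blocking form keeps the tasklet from running
           * on this CPU while the lock is held, removing the self-deadlock. */
          static void drm_lock_state_safe(struct drm_device *dev)
          {
                  unsigned long irqflags;

                  spin_lock_irqsave(&dev->lock.spinlock, irqflags);
                  /* manipulate DRM lock state */
                  spin_unlock_irqrestore(&dev->lock.spinlock, irqflags);
          }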
    • Linux 2.6.25-rc6 · a978b30a
      Authored by Linus Torvalds
    • Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/kyle/parisc-2.6 · 69d1d523
      Authored by Linus Torvalds
      * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/kyle/parisc-2.6:
        [PARISC] make ptr_to_pide() static
        [PARISC] head.S: section mismatch fixes
        [PARISC] add back Crestone Peak cpu
        [PARISC] futex: special case cmpxchg NULL in kernel space
        [PARISC] clean up show_stack
        [PARISC] add pa8900 CPUs to hardware inventory
        [PARISC] clean up include/asm-parisc/elf.h
        [PARISC] move defconfig to arch/parisc/configs/
        [PARISC] add back AD1889 MAINTAINERS entry
        [PARISC] pdc_console: fix bizarre panic on boot
        [PARISC] dump_stack in show_regs
        [PARISC] pdc_stable: fix compile errors
        [PARISC] remove unused pdc_iodc_printf function
        [PARISC] bump __NR_syscalls
        [PARISC] unbreak pgalloc.h
        [PARISC] move VMALLOC_* definitions to fixmap.h
        [PARISC] wire up timerfd syscalls
        [PARISC] remove old timerfd syscall
  2. 16 March 2008, 21 commits
  3. 15 March 2008, 10 commits
    • sched: simplify sched_slice() · 6a6029b8
      Authored by Ingo Molnar
      Use the existing calc_delta_mine() calculation for sched_slice(). This
      saves a divide and simplifies the code because we share it with the
      other /cfs_rq->load users.
      
      It also improves code size:
      
            text    data     bss     dec     hex filename
           42659    2740     144   45543    b1e7 sched.o.before
           42093    2740     144   44977    afb1 sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
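      A rough userspace model of the simplification: sched_slice() scales the
      period by the entity's share of the runqueue load through the shared
      helper instead of an open-coded divide (signatures are simplified; in
      the kernel, calc_delta_mine() also avoids the divide by multiplying with
      a cached inverse weight):

          #include <stdint.h>

          /* Shared helper: scale delta by weight / total_weight. */
          static uint64_t calc_delta_mine(uint64_t delta, uint64_t weight,
                                          uint64_t total_weight)
          {
                  return delta * weight / total_weight;
          }

          /* slice = period * se_weight / rq_weight, reusing the helper. */
          static uint64_t sched_slice(uint64_t period, uint64_t se_weight,
                                      uint64_t rq_weight)
          {
                  return calc_delta_mine(period, se_weight, rq_weight);
          }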
    • sched: fix fair sleepers · e22ecef1
      Authored by Ingo Molnar
      Fair sleepers need to scale their latency target down by runqueue
      weight. Otherwise busy systems will gain an ever larger sleep bonus.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
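      A toy model of the scaling this implies, assuming the sleeper bonus is
      the latency target weighted by a nice-0 load over the total runqueue
      load (NICE_0_LOAD and the exact formula are assumptions, not the patch):

          #include <stdint.h>

          #define NICE_0_LOAD 1024ULL   /* assumed weight of a nice-0 task */

          /* How far a waking task's vruntime is pulled back.  Dividing by the
           * runqueue weight means a busy runqueue grants a proportionally
           * smaller bonus instead of an ever larger one. */
          static uint64_t sleeper_bonus(uint64_t latency_ns, uint64_t rq_weight)
          {
                  return latency_ns * NICE_0_LOAD / rq_weight;
          }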
    • sched: fix overload performance: buddy wakeups · aa2ac252
      Authored by Peter Zijlstra
      Currently we schedule to the leftmost task in the runqueue.  When the
      runtimes are very short because of some server/client ping-pong,
      especially in over-saturated workloads, this will cycle through all
      tasks, thrashing the cache.

      Reduce cache thrashing by keeping dependent tasks together, running
      newly woken tasks first.  However, by not running the leftmost task first
      we could starve tasks, because the wakee can gain unlimited runtime.

      Therefore we only run the wakee if it is within a small
      (wakeup_granularity) window of the leftmost task.  This preserves
      fairness, but still alternates server/client task groups.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
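      A sketch of the pick logic described above, assuming a remembered
      "wakee" (buddy) pointer and a wrap-safe vruntime comparison (names are
      illustrative):

          #include <stdint.h>

          struct entity { uint64_t vruntime; };

          /* Prefer the just-woken task over the leftmost one, but only while
           * it stays within wakeup_gran of the leftmost vruntime; otherwise
           * fall back to the leftmost task to preserve fairness. */
          static struct entity *pick_next(struct entity *leftmost,
                                          struct entity *wakee,
                                          uint64_t wakeup_gran)
          {
                  if (wakee && (int64_t)(wakee->vruntime - leftmost->vruntime) <
                               (int64_t)wakeup_gran)
                          return wakee;
                  return leftmost;
          }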
    • sched: fix calc_delta_mine() · 27d11726
      Authored by Ingo Molnar
      lw->weight can be 0 for a short time during bootup.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
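      One plausible guard matching that log message, sketched in userspace C:
      recompute the cached inverse against weight + 1, so a transient zero
      weight during bootup cannot divide by zero (the constant and struct
      layout are assumptions):

          #include <stdint.h>

          #define WMULT_CONST ((uint64_t)~0U)   /* assumed fixed-point scale */

          struct load_weight { uint64_t weight; uint64_t inv_weight; };

          static void update_inv_weight(struct load_weight *lw)
          {
                  if (!lw->inv_weight)
                          lw->inv_weight = (WMULT_CONST - lw->weight / 2)
                                           / (lw->weight + 1);
          }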
    • sched: fix update_load_add()/sub() · e89996ae
      Authored by Ingo Molnar
      Clear the cached inverse value when updating load. This is needed for
      calc_delta_mine() to work correctly when using the rq load.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
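      A sketch of the described change: every weight update drops the cached
      inverse so calc_delta_mine() recomputes it lazily instead of scaling by
      a stale value (struct layout is simplified):

          #include <stdint.h>

          struct load_weight { uint64_t weight; uint64_t inv_weight; };

          static void update_load_add(struct load_weight *lw, uint64_t inc)
          {
                  lw->weight += inc;
                  lw->inv_weight = 0;     /* force recomputation on next use */
          }

          static void update_load_sub(struct load_weight *lw, uint64_t dec)
          {
                  lw->weight -= dec;
                  lw->inv_weight = 0;
          }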
    • sched: min_vruntime fix · 3fe69747
      Authored by Peter Zijlstra
      Current min_vruntime tracking is incorrect and will cause serious
      problems when we don't run the leftmost task for some reason.
      
      min_vruntime does two things: 1) it is used to determine a forward
      direction when the u64 vruntime wraps; 2) it is used to track the
      leftmost vruntime, from which newly enqueued tasks are positioned.

      The current logic advances min_vruntime whenever the current task's
      vruntime advances.  Because the current task may pass the leftmost task
      that is still waiting, we fail the second goal.  This causes new tasks to
      be placed too far ahead and thus penalizes their runtime.

      Fix this by making min_vruntime track the minimum vruntime of the waiting
      tasks, updating it on enqueue/dequeue, and comparing it against the
      current task's vruntime to obtain the absolute minimum when placing new
      tasks.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
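      A sketch of the tracking rule described above, using wrap-safe signed
      comparisons: min_vruntime follows the minimum of the leftmost waiting
      vruntime and the current task's vruntime, and only ever moves forward
      (names are illustrative):

          #include <stdint.h>

          /* wrap-safe "a is earlier than b" for u64 virtual runtimes */
          static int before(uint64_t a, uint64_t b)
          {
                  return (int64_t)(a - b) < 0;
          }

          /* Called on enqueue/dequeue: combine the leftmost waiting vruntime
           * with the current task's and advance min_vruntime monotonically. */
          static void update_min_vruntime(uint64_t *min_vruntime,
                                          uint64_t leftmost_vruntime,
                                          uint64_t curr_vruntime)
          {
                  uint64_t vruntime = curr_vruntime;

                  if (before(leftmost_vruntime, vruntime))
                          vruntime = leftmost_vruntime;

                  if (before(*min_vruntime, vruntime))
                          *min_vruntime = vruntime;
          }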
    • sched: fix race in schedule() · 0e1f3483
      Authored by Hiroshi Shimamoto
      Fix a hard-to-trigger crash seen in the -rt kernel that also affects
      the vanilla scheduler.
      
      There is a race condition between schedule() and some dequeue/enqueue
      functions: rt_mutex_setprio(), __setscheduler() and sched_move_task().

      When scheduling to idle, idle_balance() is called to pull tasks from
      other busy processors.  It might drop the rq lock.  This means that those
      three functions can observe on_rq=0 and running=1.  The current task must
      still be put when it is running.
      
      Here is a possible scenario:
      
         CPU0                               CPU1
          |                              schedule()
          |                              ->deactivate_task()
          |                              ->idle_balance()
          |                              -->load_balance_newidle()
      rt_mutex_setprio()                     |
          |                              --->double_lock_balance()
          *get lock                          *rel lock
          * on_rq=0, running=1               |
          * sched_class is changed           |
          *rel lock                          *get lock
          :                                  |
                                             :
                                         ->put_prev_task_rt()
                                         ->pick_next_task_fair()
                                             => panic
      
      The current process on CPU1 (P1) is scheduling.  P1 is deactivated, and
      the scheduler looks for another process on another CPU's runqueue because
      CPU1 will go idle.  idle_balance(), load_balance_newidle() and
      double_lock_balance() are called, and double_lock_balance() may drop the
      rq lock.  Meanwhile, CPU0 is trying to boost the priority of P1.  As a
      result of the boost, only P1's prio and sched_class are changed to RT;
      the sched entities of P1 and P1's group are never put.  This leaves the
      cfs_rq invalid, because the cfs_rq has curr but no leaf, yet
      pick_next_task_fair() is called and the kernel panics.
      Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
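      A sketch of the shape of the fix in the three dequeue/enqueue paths:
      check "running" independently of on_rq, so a task that was dequeued
      while idle_balance() dropped the rq lock is still put and re-set as
      curr (helper names follow the mainline scheduler of that era and are
      assumptions here, not the literal patch):

          static void change_task_class(struct rq *rq, struct task_struct *p,
                                        const struct sched_class *new_class,
                                        int new_prio)
          {
                  int on_rq   = p->se.on_rq;
                  int running = task_current(rq, p);  /* not nested under on_rq */

                  if (on_rq)
                          dequeue_task(rq, p, 0);
                  if (running)                        /* put curr even when !on_rq */
                          p->sched_class->put_prev_task(rq, p);

                  p->sched_class = new_class;
                  p->prio = new_prio;

                  if (running)
                          p->sched_class->set_curr_task(rq);
                  if (on_rq)
                          enqueue_task(rq, p, 0);
          }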
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6 · 4faa8496
      Authored by Linus Torvalds
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6:
        firewire: fw-ohci: shut up false compiler warning on PPC32
        firewire: fw-ohci: use dma_alloc_coherent for ar_buffer
        ieee1394: sbp2: fix for SYM13FW500 bridge (Datafab disk)
        firewire: fw-sbp2: fix for SYM13FW500 bridge (Datafab disk)
        firewire: update Kconfig help text
        firewire: warn on fatal condition in topology code
        firewire: fw-sbp2: set single-phase retry_limit
        firewire: fw-ohci: Apple UniNorth 1st generation support
        firewire: fw-ohci: PPC PMac platform code
        firewire: endianess annotations
        firewire: endianess fix
    • nfsd: fix oops on access from high-numbered ports · b663c6fd
      Authored by J. Bruce Fields
      This bug was always here, but before my commit 6fa02839
      ("recheck for secure ports in fh_verify"), it could only be triggered by
      failure of a kmalloc().  After that commit it could be triggered by a
      client making a request from a non-reserved port for access to an export
      marked "secure".  (Exports are "secure" by default.)
      
      The result is a struct svc_export with a reference count one too low,
      resulting in likely oopses next time the export is accessed.
      
      The reference counting here is not straightforward; a later patch will
      clean up fh_verify().
      
      Thanks to Lukas Hejtmanek for the bug report and followup.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Cc: Lukas Hejtmanek <xhejtman@ics.muni.cz>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
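      A toy userspace illustration (not the nfsd code) of why a reference
      count one too low oopses on the next access rather than at the point of
      the bug: the final put frees the object while the export cache still
      holds a pointer to it, so the following lookup touches freed memory:

          #include <stdlib.h>

          struct export { int refcount; };

          static void export_put(struct export *exp)
          {
                  if (--exp->refcount == 0)
                          free(exp);      /* gone, although still cached */
          }

          int main(void)
          {
                  struct export *exp = calloc(1, sizeof(*exp));

                  exp->refcount = 2;      /* one ref for the cache, one for us */
                  export_put(exp);        /* our reference dropped on the error path */
                  export_put(exp);        /* dropped once too often: the cache now
                                             points at freed memory and the next
                                             access blows up */
                  return 0;
          }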
    • struct export_operations: adjust comments to match current members · 9b89ca7a
      Authored by Marc Dionne
      The comments in the definition of struct export_operations don't match the
      current members.
      
      Add comments for the 2 new functions and remove 2 comments for unused ones.
      Signed-off-by: Marc Dionne <marc.c.dionne@gmail.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 14 March 2008, 4 commits