1. 27 Jun 2011, 1 commit
  2. 23 Jun 2011, 8 commits
  3. 05 Apr 2011, 1 commit
  4. 14 Mar 2011, 1 commit
    • drm/nouveau: properly handle pushbuffer check failures · 7fa0cba2
      Committed by Marcin Slusarz
      When the "buffer in list" check fails, don't free the validation lists - they
      have not been initialized yet.
      
      Fixes this oops:
      
      [drm] nouveau 0000:02:00.0: push 105 buffer not in list
      BUG: unable to handle kernel NULL pointer dereference at 000000000000057c
      IP: [<ffffffff81236aa4>] do_raw_spin_lock+0x14/0x13c
      PGD 1ac6cb067 PUD 1aaa52067 PMD 0
      CPU 0
      Modules linked in: nouveau ttm drm_kms_helper snd_hda_codec_realtek snd_hda_intel snd_hda_codec
      
      Pid: 6265, comm: OilRush_x86 Not tainted 2.6.38-rc6-nv+ #632 System manufacturer System Product Name/P6T SE
      RIP: 0010:[<ffffffff81236aa4>]  [<ffffffff81236aa4>] do_raw_spin_lock+0x14/0x13c
      (...)
      Process OilRush_x86 (pid: 6265, threadinfo ffff8801a6aee000, task ffff8801a26c0000)
       0000000000000000 ffff8801ac74c618 0000000000000000 0000000000000578
       0000000000000000 ffff8801ac74c618 0000000000000000 ffff8801bd9d0000
       [<ffffffff81417f78>] _raw_spin_lock+0x1e/0x22
       [<ffffffffa00a2746>] nouveau_bo_fence+0x2e/0x60 [nouveau]
       [<ffffffffa00a540b>] validate_fini_list+0x35/0xeb [nouveau]
       [<ffffffffa00a54d3>] validate_fini+0x12/0x31 [nouveau]
       [<ffffffffa00a6386>] nouveau_gem_ioctl_pushbuf+0xe94/0xf6b [nouveau]
       [<ffffffff8141ac56>] ? sub_preempt_count+0x9e/0xb2
       [<ffffffff81417e94>] ? _raw_spin_unlock_irqrestore+0x30/0x4d
       [<ffffffff8105dea2>] ? __wake_up+0x3f/0x48
       [<ffffffff812aebb4>] drm_ioctl+0x289/0x361
       [<ffffffff8141ac56>] ? sub_preempt_count+0x9e/0xb2
       [<ffffffffa00a54f2>] ? nouveau_gem_ioctl_pushbuf+0x0/0xf6b [nouveau]
       [<ffffffff8141ac56>] ? sub_preempt_count+0x9e/0xb2
       [<ffffffffa010caa2>] nouveau_compat_ioctl+0x16/0x1c [nouveau]
       [<ffffffff81142c0d>] compat_sys_ioctl+0x1c8/0x12d7
       [<ffffffff814179ca>] ? trace_hardirqs_off_thunk+0x3a/0x6c
       [<ffffffff81058099>] sysenter_dispatch+0x7/0x30
       [<ffffffff8141798e>] ? trace_hardirqs_on_thunk+0x3a/0x3c
      RIP  [<ffffffff81236aa4>] do_raw_spin_lock+0x14/0x13c
       RSP <ffff8801a6aefb88>
      ---[ end trace 0014d5d93e6147e1 ]---
      
      Additionally, don't call validate_fini twice in case of validation failure
      (a minimal sketch of the corrected error path follows this commit).
      Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
      Signed-off-by: Maarten Maathuis <madman2003@gmail.com>
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
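      A minimal user-space sketch of the error-path rule this fix enforces: tear down
      only what has actually been set up, and tear it down exactly once. The struct
      and function names here (validate_op, pushbuf_submit, lists_ready) are invented
      for illustration and are not the real nouveau_gem.c symbols.

      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct validate_op {
              bool lists_ready;       /* true only once the validation lists exist */
      };

      static void validate_fini(struct validate_op *op)
      {
              /* Walking lists that were never built is the NULL deref in the oops. */
              printf("tearing down validation lists\n");
              op->lists_ready = false;
      }

      static int pushbuf_submit(struct validate_op *op, bool buffers_in_list)
      {
              if (!buffers_in_list) {
                      fprintf(stderr, "push buffer not in list\n");
                      return -EINVAL; /* bail out before the lists are initialized */
              }

              op->lists_ready = true; /* validation lists are set up from here on */

              /* ... validation work; on failure, call validate_fini() exactly once ... */

              validate_fini(op);      /* single teardown on the completion path */
              return 0;
      }

      int main(void)
      {
              struct validate_op op = { .lists_ready = false };

              /* A failed check now returns early instead of freeing lists that
               * were never created. */
              return pushbuf_submit(&op, false) == -EINVAL ? 0 : 1;
      }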
  5. 25 Feb 2011, 3 commits
  6. 08 Dec 2010, 2 commits
  7. 03 Dec 2010, 5 commits
  8. 22 Nov 2010, 1 commit
    • drm/ttm/radeon/nouveau: Kill the bo lock in favour of a bo device fence_lock · 702adba2
      Committed by Thomas Hellstrom
      The bo lock was used only to protect the bo sync object members, and since it
      is a per-bo lock, fencing a buffer list will see a lot of locks and unlocks.
      Replace it with a per-device lock that protects the sync object members on
      *all* bos. Reading and setting these members will always be very quick, so
      the risk of heavy lock contention is microscopic. Note that waiting for
      sync objects will always take place outside of this lock.
      
      The bo device fence lock will eventually be replaced with a seqlock /
      rcu mechanism so we can determine that a bo is idle under an
      rcu / read seqlock.
      
      However, this change will allow us to batch fencing and unreserving of
      buffers with a minimal amount of locking (a small sketch of the resulting
      locking pattern follows this commit).
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
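      A small user-space sketch of that locking pattern: one per-device lock covers
      the sync object members of every buffer object, it is only held for short
      pointer reads and writes, and waiting on a fence happens outside it. The types
      and names (bo_device, buffer_object, sync_obj) are illustrative, not the TTM API.

      #include <pthread.h>

      struct bo_device {
              pthread_mutex_t fence_lock;     /* protects sync_obj on *all* bos */
      };

      struct buffer_object {
              struct bo_device *bdev;
              void *sync_obj;                 /* fence this bo is currently bound to */
      };

      /* Fencing a bo is a quick pointer update under the shared device lock. */
      static void bo_fence(struct buffer_object *bo, void *fence)
      {
              pthread_mutex_lock(&bo->bdev->fence_lock);
              bo->sync_obj = fence;
              pthread_mutex_unlock(&bo->bdev->fence_lock);
      }

      /* Readers grab the current fence under the lock, then wait on it outside
       * the lock, so the shared lock is never held across a sync object wait. */
      static void *bo_current_fence(struct buffer_object *bo)
      {
              void *obj;

              pthread_mutex_lock(&bo->bdev->fence_lock);
              obj = bo->sync_obj;
              pthread_mutex_unlock(&bo->bdev->fence_lock);
              return obj;
      }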
  9. 18 Nov 2010, 1 commit
  10. 05 Oct 2010, 1 commit
  11. 01 Oct 2010, 1 commit
    • drm/gem: handlecount isn't really a kref so don't make it one. · 29d08b3e
      Committed by Dave Airlie
      There were lots of places being inconsistent since the handle count
      looked like a kref but it really wasn't.
      
      Fix this by just making the handle count an atomic on the object,
      and have it increase the normal object kref.
      
      Now i915/radeon/nouveau drivers can drop the normal reference on
      userspace object creation, and have the handle hold it.
      
      This patch fixes a memory leak or corruption on unload, because
      the driver had no way of knowing whether a handle had actually been
      added for this object, and the fbcon object needed to know this
      to clean itself up properly (a sketch of the resulting refcounting
      model follows this commit).
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
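      A user-space sketch of that refcounting model: the handle count is a plain
      atomic, and each handle pins the object by taking one normal reference, so the
      driver can drop its creation-time reference right away. The names (gem_object,
      gem_handle_create, ...) are illustrative, not the real drm_gem API.

      #include <stdatomic.h>
      #include <stdlib.h>

      struct gem_object {
              atomic_int refcount;            /* the "real" object reference count */
              atomic_int handle_count;        /* how many userspace handles exist */
      };

      static void gem_object_get(struct gem_object *obj)
      {
              atomic_fetch_add(&obj->refcount, 1);
      }

      static void gem_object_put(struct gem_object *obj)
      {
              if (atomic_fetch_sub(&obj->refcount, 1) == 1)
                      free(obj);              /* last reference frees the object */
      }

      /* Creating a handle bumps the handle count and takes a normal reference,
       * so the driver no longer needs to keep its own reference after creation. */
      static void gem_handle_create(struct gem_object *obj)
      {
              atomic_fetch_add(&obj->handle_count, 1);
              gem_object_get(obj);
      }

      static void gem_handle_delete(struct gem_object *obj)
      {
              atomic_fetch_sub(&obj->handle_count, 1);
              gem_object_put(obj);            /* the handle's reference goes with it */
      }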
  12. 03 Sep 2010, 1 commit
  13. 27 Aug 2010, 1 commit
  14. 26 Aug 2010, 1 commit
  15. 17 Aug 2010, 1 commit
    • drm/nouveau: fix race condition when under memory pressure · 415e6186
      Committed by Ben Skeggs
      When VRAM is running out it's possible that the client's push buffers get
      evicted to main memory.  When they're validated back in, the GPU may
      be used for the copy back to VRAM, but the existing synchronisation code
      only deals with inter-channel sync, not sync between PFIFO and PGRAPH on
      the same channel.  This leads to PFIFO fetching from command buffers that
      haven't quite been copied by PGRAPH yet.
      
      This patch marks push buffers as such, and forces any GPU-assisted buffer
      moves to be done on a different channel, which triggers the correct
      synchronisation to happen before we submit them (a sketch of this
      channel-selection rule follows this commit).
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
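      A short sketch of that channel-selection rule: if a buffer backs a push buffer,
      a GPU-assisted move must not run on the channel that will consume it, so the
      copy is routed through another channel and the existing inter-channel
      synchronisation applies. Structure and function names are illustrative, not
      the real nouveau code.

      #include <stdbool.h>

      struct channel;                         /* opaque; only used by pointer here */

      struct buffer_object {
              bool is_pushbuf;                /* marked when the bo backs a push buffer */
              struct channel *owner;          /* channel that will read this push buffer */
      };

      static struct channel *pick_move_channel(const struct buffer_object *bo,
                                               struct channel *preferred,
                                               struct channel *fallback)
      {
              /*
               * Copying a push buffer on its own channel lets PFIFO fetch commands
               * that PGRAPH has not finished writing back to VRAM.  Forcing the
               * move onto a different channel triggers the inter-channel sync
               * path before the push buffer is submitted.
               */
              if (bo->is_pushbuf && preferred == bo->owner)
                      return fallback;
              return preferred;
      }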
  16. 10 Aug 2010, 1 commit
  17. 13 Jul 2010, 2 commits
  18. 20 Apr 2010, 1 commit
  19. 09 Apr 2010, 1 commit
  20. 08 Apr 2010, 1 commit
  21. 25 Feb 2010, 4 commits
  22. 11 Feb 2010, 1 commit