1. 22 Mar 2012, 1 commit
  2. 01 Feb 2012, 1 commit
    • drm/nouveau/gem: fix fence_sync race / oops · 525895ba
      Ben Skeggs authored
      Due to a race it was possible for a fence to be destroyed while another
      thread was trying to synchronise with it.  If this happened in the fallback
      non-semaphore path, it led to the following oops because fence->channel
      was NULL.  (A guard sketch follows this entry.)
      
      BUG: unable to handle kernel NULL pointer dereference at   (null)
      IP: [<fa9632ce>] nouveau_fence_update+0xe/0xe0 [nouveau]
      *pde = a649c067
      SMP
      Modules linked in: fuse nouveau(O) ttm(O) drm_kms_helper(O) drm(O) mxm_wmi video wmi netconsole configfs lockd bnep bluetooth rfkill ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack ip6table_filter ip6_tables snd_hda_codec_realtek snd_hda_intel snd_hda_cobinfmt_misc uinput ata_generic pata_acpi pata_aet2c_algo_bit i2c_core [last unloaded: wmi]
      
      Pid: 2255, comm: gnome-shell Tainted: G           O 3.2.0-0.rc5.git0.1.fc17.i686 #1 System manufacturer System Product Name/M2A-VM
      EIP: 0060:[<fa9632ce>] EFLAGS: 00010296 CPU: 1
      EIP is at nouveau_fence_update+0xe/0xe0 [nouveau]
      EAX: 00000000 EBX: ddfc6dd0 ECX: dd111580 EDX: 00000000
      ESI: 00003e80 EDI: dd111580 EBP: dd121d00 ESP: dd121ce8
       DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
      Process gnome-shell (pid: 2255, ti=dd120000 task=dd111580 task.ti=dd120000)
      Stack:
       7dc86c76 00000000 00003e80 ddfc6dd0 00003e80 dd111580 dd121d0c fa96371f
       00000000 dd121d3c fa963773 dd111580 01000246 000ec53d 00000000 ddfc6dd0
       00001f40 00000000 ddfc6dd0 00000010 dc7df840 dd121d6c fa9639a0 00000000
      Call Trace:
       [<fa96371f>] __nouveau_fence_signalled+0x1f/0x30 [nouveau]
       [<fa963773>] __nouveau_fence_wait+0x43/0xd0 [nouveau]
       [<fa9639a0>] nouveau_fence_sync+0x1a0/0x1c0 [nouveau]
       [<fa964046>] validate_list+0x176/0x300 [nouveau]
       [<f7d9c9c0>] ? ttm_bo_mem_put+0x30/0x30 [ttm]
       [<fa964b8a>] nouveau_gem_ioctl_pushbuf+0x48a/0xfd0 [nouveau]
       [<c0406481>] ? die+0x31/0x80
       [<f7c93d98>] drm_ioctl+0x388/0x490 [drm]
       [<c0406481>] ? die+0x31/0x80
       [<fa964700>] ? nouveau_gem_ioctl_new+0x150/0x150 [nouveau]
       [<c0635c7b>] ? file_has_perm+0xcb/0xe0
       [<f7c93a10>] ? drm_copy_field+0x80/0x80 [drm]
       [<c0564f56>] do_vfs_ioctl+0x86/0x5b0
       [<c0406481>] ? die+0x31/0x80
       [<c0635f22>] ? selinux_file_ioctl+0x62/0x130
       [<c0554f30>] ? fget_light+0x30/0x340
       [<c05654ef>] sys_ioctl+0x6f/0x80
       [<c099e3a4>] syscall_call+0x7/0xb
       [<c0406481>] ? die+0x31/0x80
       [<c0406481>] ? die+0x31/0x80
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
      Cc: stable@vger.kernel.org
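      A minimal sketch, in kernel-flavoured C, of the kind of guard this fix
      implies.  The names here (struct fence, fence_wait, fence_release) are
      illustrative assumptions, not the actual nouveau code of that era: the
      waiter holds its own reference for the whole wait, and the fallback
      (non-semaphore) path tolerates fence->channel being NULL instead of
      dereferencing it.

      #include <linux/kref.h>

      struct channel;

      struct fence {
              struct kref refcount;
              struct channel *channel;  /* NULL once the owning channel is gone */
      };

      static void fence_release(struct kref *kref);  /* assumed destructor */
      static int fence_wait(struct fence *fence);    /* assumed wait helper */

      static int fence_sync(struct fence *fence)
      {
              int ret = 0;

              /* Own reference: a concurrent destroy can no longer free
               * the fence while we are still synchronising with it. */
              kref_get(&fence->refcount);

              /* Fallback path guard: the channel may already be gone. */
              if (fence->channel)
                      ret = fence_wait(fence);

              kref_put(&fence->refcount, fence_release);
              return ret;
      }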
  3. 28 Oct 2011, 1 commit
  4. 01 Sep 2011, 1 commit
    • drm/ttm: add a way to bo_wait for either the last read or last write · dfadbbdb
      Marek Olšák authored
      Sometimes we want to know whether a buffer is busy and wait for it (bo_wait).
      However, sometimes it would be more useful to query whether a buffer is busy
      being either read or written, and to wait until it has stopped being read or
      written.  The point of this is to avoid unnecessary waiting: e.g. if a GPU has
      written something to a buffer and is now reading that buffer, and a CPU wants
      to map that buffer for read, it only needs to wait for the last write.  If
      there had been no write, no waiting would be needed.
      
      This, of course, requires user space drivers to send read/write flags
      with each relocation (like the read/write domains we already have in radeon,
      so we can actually use those for something useful now).
      
      How this patch works (a hedged sketch follows this entry):

      The read/write flags should be passed to ttm_validate_buffer.  TTM maintains
      separate sync objects for the last read and the last write of each buffer, in
      addition to the sync object for the last use of the buffer.  ttm_bo_wait then
      operates on one of these sync objects.
      Signed-off-by: Marek Olšák <maraeo@gmail.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
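      A hedged sketch of the mechanism in C.  The field and function names
      (sync_obj_read, sync_obj_wait, and so on) are illustrative assumptions,
      not TTM's actual interface:

      enum wait_kind { WAIT_LAST_USE, WAIT_LAST_READ, WAIT_LAST_WRITE };

      struct buffer {
              void *sync_obj;        /* last use, read or write */
              void *sync_obj_read;   /* last read only */
              void *sync_obj_write;  /* last write only */
      };

      static int sync_obj_wait(void *sync);  /* assumed driver-provided wait */

      static int buffer_wait(struct buffer *bo, enum wait_kind kind)
      {
              void *sync;

              switch (kind) {
              case WAIT_LAST_READ:  sync = bo->sync_obj_read;  break;
              case WAIT_LAST_WRITE: sync = bo->sync_obj_write; break;
              default:              sync = bo->sync_obj;       break;
              }

              if (!sync)
                      return 0;  /* that kind of access never happened */
              return sync_obj_wait(sync);
      }

      A CPU mapping a buffer for read would call buffer_wait(bo, WAIT_LAST_WRITE)
      and skip waiting entirely when the GPU has only been reading the buffer.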
  5. 27 Jun 2011, 1 commit
  6. 23 Jun 2011, 8 commits
  7. 05 Apr 2011, 1 commit
  8. 14 Mar 2011, 1 commit
    • drm/nouveau: properly handle pushbuffer check failures · 7fa0cba2
      Marcin Slusarz authored
      When "buffer in list" check does not pass, don't free validation lists - they were
      not initialized yet.
      
      Fixes this oops:
      
      [drm] nouveau 0000:02:00.0: push 105 buffer not in list
      BUG: unable to handle kernel NULL pointer dereference at 000000000000057c
      IP: [<ffffffff81236aa4>] do_raw_spin_lock+0x14/0x13c
      PGD 1ac6cb067 PUD 1aaa52067 PMD 0
      CPU 0
      Modules linked in: nouveau ttm drm_kms_helper snd_hda_codec_realtek snd_hda_intel snd_hda_codec
      
      Pid: 6265, comm: OilRush_x86 Not tainted 2.6.38-rc6-nv+ #632 System manufacturer System Product Name/P6T SE
      RIP: 0010:[<ffffffff81236aa4>]  [<ffffffff81236aa4>] do_raw_spin_lock+0x14/0x13c
      (...)
      Process OilRush_x86 (pid: 6265, threadinfo ffff8801a6aee000, task ffff8801a26c0000)
       0000000000000000 ffff8801ac74c618 0000000000000000 0000000000000578
       0000000000000000 ffff8801ac74c618 0000000000000000 ffff8801bd9d0000
       [<ffffffff81417f78>] _raw_spin_lock+0x1e/0x22
       [<ffffffffa00a2746>] nouveau_bo_fence+0x2e/0x60 [nouveau]
       [<ffffffffa00a540b>] validate_fini_list+0x35/0xeb [nouveau]
       [<ffffffffa00a54d3>] validate_fini+0x12/0x31 [nouveau]
       [<ffffffffa00a6386>] nouveau_gem_ioctl_pushbuf+0xe94/0xf6b [nouveau]
       [<ffffffff8141ac56>] ? sub_preempt_count+0x9e/0xb2
       [<ffffffff81417e94>] ? _raw_spin_unlock_irqrestore+0x30/0x4d
       [<ffffffff8105dea2>] ? __wake_up+0x3f/0x48
       [<ffffffff812aebb4>] drm_ioctl+0x289/0x361
       [<ffffffff8141ac56>] ? sub_preempt_count+0x9e/0xb2
       [<ffffffffa00a54f2>] ? nouveau_gem_ioctl_pushbuf+0x0/0xf6b [nouveau]
       [<ffffffff8141ac56>] ? sub_preempt_count+0x9e/0xb2
       [<ffffffffa010caa2>] nouveau_compat_ioctl+0x16/0x1c [nouveau]
       [<ffffffff81142c0d>] compat_sys_ioctl+0x1c8/0x12d7
       [<ffffffff814179ca>] ? trace_hardirqs_off_thunk+0x3a/0x6c
       [<ffffffff81058099>] sysenter_dispatch+0x7/0x30
       [<ffffffff8141798e>] ? trace_hardirqs_on_thunk+0x3a/0x3c
      RIP  [<ffffffff81236aa4>] do_raw_spin_lock+0x14/0x13c
       RSP <ffff8801a6aefb88>
      ---[ end trace 0014d5d93e6147e1 ]---
      
      Additionally, don't call validate_fini twice in case of validation failure
      (a sketch of the corrected error paths follows this entry).
      Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
      Signed-off-by: Maarten Maathuis <madman2003@gmail.com>
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
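      A hedged sketch of the corrected error-path ordering (all helper names
      are illustrative, not the real nouveau functions): fail with a plain
      return while the lists are still uninitialised, and let cleanup run
      exactly once.

      struct context;

      static int check_buffers_in_list(struct context *ctx);  /* assumed */
      static int validate_init(struct context *ctx);          /* assumed */
      static int do_submission(struct context *ctx);          /* assumed */
      static void validate_fini(struct context *ctx);         /* assumed */

      static int pushbuf_submit(struct context *ctx)
      {
              int ret;

              /* Lists are not initialised yet: plain return, never a
               * cleanup call that would touch uninitialised data. */
              ret = check_buffers_in_list(ctx);
              if (ret)
                      return ret;

              /* Assume validate_init undoes its own partial work on
               * failure, so the caller must not clean up again. */
              ret = validate_init(ctx);
              if (ret)
                      return ret;

              ret = do_submission(ctx);

              validate_fini(ctx);  /* the single cleanup point, called once */
              return ret;
      }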
  9. 25 Feb 2011, 3 commits
  10. 08 Dec 2010, 2 commits
  11. 03 Dec 2010, 5 commits
  12. 22 Nov 2010, 1 commit
    • drm/ttm/radeon/nouveau: Kill the bo lock in favour of a bo device fence_lock · 702adba2
      Thomas Hellstrom authored
      The bo lock was used only to protect the bo sync object members, and since
      it is a per-bo lock, fencing a buffer list incurs a lot of locks and unlocks.
      Replace it with a per-device lock that protects the sync object members on
      *all* bos.  Reading and setting these members will always be very quick, so
      the risk of heavy lock contention is microscopic.  Note that waiting for
      sync objects will always take place outside of this lock.
      
      The bo device fence lock will eventually be replaced with a seqlock /
      rcu mechanism so we can determine that a bo is idle under an
      rcu / read seqlock.

      However, this change will allow us to batch fencing and unreserving of
      buffers with a minimal amount of locking (see the sketch after this entry).
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
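      A hedged sketch in C of the locking change (types and names are
      illustrative, not the real TTM structures): one device-wide spinlock
      covers the sync object members of every bo, so fencing a whole list
      costs a single lock round-trip.

      #include <linux/spinlock.h>
      #include <linux/list.h>

      struct bo_device {
              spinlock_t fence_lock;  /* protects sync_obj on *all* bos */
      };

      struct bo {
              struct list_head entry;
              void *sync_obj;         /* protected by bo_device::fence_lock */
      };

      static void fence_buffer_list(struct bo_device *bdev,
                                    struct list_head *bos, void *fence)
      {
              struct bo *bo;

              /* One lock/unlock for the whole batch, not one per bo. */
              spin_lock(&bdev->fence_lock);
              list_for_each_entry(bo, bos, entry) {
                      /* Setting the member is quick; any waiting on the
                       * sync object happens outside this lock. */
                      bo->sync_obj = fence;
              }
              spin_unlock(&bdev->fence_lock);
      }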
  13. 18 Nov 2010, 1 commit
  14. 05 Oct 2010, 1 commit
  15. 01 Oct 2010, 1 commit
    • drm/gem: handlecount isn't really a kref so don't make it one. · 29d08b3e
      Dave Airlie authored
      There were lots of places being inconsistent, since handle count
      looked like a kref but really wasn't.

      Fix this by just making handle count an atomic on the object,
      and have it increase the normal object kref.
      
      Now i915/radeon/nouveau drivers can drop the normal reference on
      userspace object creation, and have the handle hold it.
      
      This patch fixes a memory leak or corruption on unload, because the
      driver had no way of knowing whether a handle had actually been added
      for an object, and the fbcon object needed to know this to clean
      itself up properly.  (A sketch of the scheme follows this entry.)
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
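      A hedged sketch of the scheme (names are illustrative, not the drm_gem
      API verbatim): the handle count becomes a plain atomic, and each handle
      pins the object through the one real kref.

      #include <linux/kref.h>
      #include <linux/atomic.h>

      struct gem_object {
              struct kref refcount;   /* the one true reference count */
              atomic_t handle_count;  /* how many userspace handles exist */
      };

      static void gem_object_free(struct kref *kref);  /* assumed release */

      static void gem_handle_create(struct gem_object *obj)
      {
              atomic_inc(&obj->handle_count);
              kref_get(&obj->refcount);  /* the handle holds a reference */
      }

      static void gem_handle_delete(struct gem_object *obj)
      {
              atomic_dec(&obj->handle_count);
              kref_put(&obj->refcount, gem_object_free);
      }

      On unload, a driver (or its fbcon code) can test
      atomic_read(&obj->handle_count) to learn whether a handle was ever
      attached, instead of guessing and leaking or double-freeing.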
  16. 03 Sep 2010, 1 commit
  17. 27 Aug 2010, 1 commit
  18. 26 Aug 2010, 1 commit
  19. 17 Aug 2010, 1 commit
    • drm/nouveau: fix race condition when under memory pressure · 415e6186
      Ben Skeggs authored
      When VRAM is running out it's possible that the client's push buffers get
      evicted to main memory.  When they're validated back in, the GPU may
      be used for the copy back to VRAM, but the existing synchronisation code
      only deals with inter-channel sync, not sync between PFIFO and PGRAPH on
      the same channel.  This leads to PFIFO fetching from command buffers that
      haven't quite been copied by PGRAPH yet.
      
      This patch marks push buffers as such, and forces any GPU-assisted moves
      of them to be done on a different channel, which triggers the correct
      synchronisation to happen before we submit them (see the sketch after
      this entry).
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
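      A hedged sketch of the routing decision (types and names are
      illustrative): buffers flagged as push buffers are never moved by the
      GPU on the channel that will consume them, which forces the existing
      inter-channel synchronisation to run first.

      #include <linux/types.h>

      struct channel;

      struct bo {
              bool is_pushbuf;          /* set when the bo backs a push buffer */
              struct channel *channel;  /* channel that will fetch from it */
      };

      static struct channel *pick_move_channel(struct bo *bo,
                                               struct channel *requested)
      {
              /* Copying a push buffer with PGRAPH on its own channel lets
               * PFIFO fetch stale data; returning NULL makes the caller
               * pick a different channel (or a CPU copy), which triggers
               * proper synchronisation before submission. */
              if (bo->is_pushbuf && requested == bo->channel)
                      return NULL;
              return requested;
      }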
  20. 10 Aug 2010, 1 commit
  21. 13 Jul 2010, 2 commits
  22. 20 Apr 2010, 1 commit
  23. 09 Apr 2010, 1 commit
  24. 08 Apr 2010, 1 commit
  25. 25 Feb 2010, 1 commit