- 22 November 2010, 4 commits
-
Committed by Thomas Hellstrom
Drastically reduce the number of spin lock / unlock operations by performing unreserving and fencing under global locks.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Thomas Hellstrom
The bo lock was used only to protect the bo sync object members, and since it is a per-bo lock, fencing a buffer list will see a lot of locks and unlocks. Replace it with a per-device lock that protects the sync object members on *all* bos. Reading and setting these members will always be very quick, so the risk of heavy lock contention is microscopic. Note that waiting for sync objects will always take place outside of this lock.

The bo device fence lock will eventually be replaced with a seqlock / rcu mechanism so we can determine that a bo is idle under a rcu / read seqlock. However, this change will allow us to batch fencing and unreserving of buffers with a minimal amount of locking.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
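To illustrate the idea, here is a minimal, self-contained userspace model (plain C with pthreads, not the kernel code): the sync-object field of every buffer object is guarded by one device-wide fence lock, so fencing a whole validation list takes a single lock/unlock pair instead of one pair per bo. All names (demo_device, demo_bo, demo_fence_buffer_list) are hypothetical stand-ins for the TTM structures.

/*
 * Minimal userspace model (pthreads) of a per-device fence lock that
 * guards the sync-object members of every buffer object, so fencing a
 * whole list of bos takes one lock instead of one lock per bo.
 * All names here are illustrative; this is not the kernel TTM code.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

struct demo_bo {
	void *sync_obj;          /* protected by demo_device.fence_lock */
	struct demo_bo *next;    /* simple singly linked validation list */
};

struct demo_device {
	pthread_spinlock_t fence_lock;  /* one lock for all bos */
};

/* Attach the same fence to every bo on the list under a single lock. */
static void demo_fence_buffer_list(struct demo_device *dev,
				   struct demo_bo *head, void *sync_obj)
{
	struct demo_bo *bo;

	pthread_spin_lock(&dev->fence_lock);
	for (bo = head; bo; bo = bo->next)
		bo->sync_obj = sync_obj;    /* quick writes, no waiting in here */
	pthread_spin_unlock(&dev->fence_lock);
}

int main(void)
{
	struct demo_device dev;
	struct demo_bo a = { NULL, NULL }, b = { NULL, &a };
	int fence = 42;

	pthread_spin_init(&dev.fence_lock, PTHREAD_PROCESS_PRIVATE);
	demo_fence_buffer_list(&dev, &b, &fence);
	printf("a.sync_obj=%p b.sync_obj=%p\n", a.sync_obj, b.sync_obj);
	pthread_spin_destroy(&dev.fence_lock);
	return 0;
}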
-
Committed by Thomas Hellstrom
Avoid the ttm_bo_unreserve() spinlocks by calling ttm_eu_backoff_reservation_locked() under the lru spinlock.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Dave Airlie
Makes it possible to reserve a list of buffer objects with a single spin lock / unlock if there is no contention. Should improve CPU usage on SMP kernels.

v2: Initialize private list members on reserve and don't call ttm_bo_list_ref_sub() with zero put_count.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
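A rough sketch of the fastpath this describes, again as a standalone userspace model rather than the kernel implementation: the whole list is walked under one acquisition of the global lru lock, and on contention everything reserved so far is backed off while the lock is still held. The slow path that waits for the contended buffer is omitted, and all identifiers are invented for the example.

/*
 * Userspace model of the "reserve a whole list under one lru lock"
 * fastpath: take the global lock once, try to reserve every bo, and if
 * one is already reserved back everything off while still holding the
 * lock.  Illustrative only; names do not match the kernel TTM API.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct demo_bo {
	bool reserved;
	struct demo_bo *next;
};

struct demo_device {
	pthread_mutex_t lru_lock;   /* the single global lock */
};

/* Returns true if every bo on the list was reserved, false on contention. */
static bool demo_reserve_list(struct demo_device *dev, struct demo_bo *head)
{
	struct demo_bo *bo, *undo;
	bool ok = true;

	pthread_mutex_lock(&dev->lru_lock);
	for (bo = head; bo; bo = bo->next) {
		if (bo->reserved) {             /* contention: back off */
			for (undo = head; undo != bo; undo = undo->next)
				undo->reserved = false;
			ok = false;
			break;
		}
		bo->reserved = true;
	}
	pthread_mutex_unlock(&dev->lru_lock);   /* one lock/unlock pair */
	return ok;
}

int main(void)
{
	struct demo_device dev = { PTHREAD_MUTEX_INITIALIZER };
	struct demo_bo a = { false, NULL }, b = { false, &a };

	printf("reserved: %d\n", demo_reserve_list(&dev, &b));
	return 0;
}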
-
- 07 December 2009, 1 commit
-
Committed by Thomas Hellstrom
Utilities to reserve, unreserve and fence a list of TTM buffer objects in a deadlock-safe manner. Used by the vmwgfx driver.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
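The pattern these utilities implement (reserve every buffer on a validation list or none of them, never block while holding a partial set of reservations, then fence and unreserve) can be sketched with ordinary mutexes as below. The real kernel entry points are ttm_eu_reserve_buffers(), ttm_eu_backoff_reservation() and ttm_eu_fence_buffer_objects(); the code here uses invented names and a simplified retry loop, so treat it as an illustration of the locking discipline rather than the actual API.

/*
 * Self-contained model of the deadlock-safe pattern the execbuf
 * utilities implement: either reserve every buffer on the list or
 * reserve none, and never hold some reservations while blocking on
 * another.  All names are hypothetical.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

struct demo_bo {
	pthread_mutex_t reserve;    /* per-bo reservation */
	void *sync_obj;             /* fence attached after submission */
};

/* Try to reserve all bos; on contention release what we hold and retry. */
static void demo_reserve_all(struct demo_bo **bos, int n)
{
	int i, j;

retry:
	for (i = 0; i < n; i++) {
		if (pthread_mutex_trylock(&bos[i]->reserve) != 0) {
			for (j = 0; j < i; j++)      /* back off: avoids deadlock */
				pthread_mutex_unlock(&bos[j]->reserve);
			sched_yield();               /* let the other holder finish */
			goto retry;
		}
	}
}

static void demo_fence_and_unreserve(struct demo_bo **bos, int n, void *fence)
{
	int i;

	for (i = 0; i < n; i++) {
		bos[i]->sync_obj = fence;            /* fence while still reserved */
		pthread_mutex_unlock(&bos[i]->reserve);
	}
}

int main(void)
{
	struct demo_bo a = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct demo_bo b = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct demo_bo *list[] = { &a, &b };
	int fence = 1;

	demo_reserve_all(list, 2);
	/* ...build and submit the command stream here... */
	demo_fence_and_unreserve(list, 2, &fence);
	printf("fenced: %p %p\n", a.sync_obj, b.sync_obj);
	return 0;
}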
-