- 28 June 2013, 4 commits
-
-
Committed by Maarten Lankhorst
Use lockdep_assert_held instead. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
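A minimal sketch of the assertion style this commit switches to (hypothetical helper and lock, not the actual TTM hunk); lockdep_assert_held() documents and checks the locking rule only when lockdep is enabled:

    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    /* Assert that the caller already holds the given lock. */
    static void sketch_assert_locked(struct mutex *lock)
    {
            lockdep_assert_held(lock);      /* compiles to nothing without CONFIG_LOCKDEP */
    }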
-
Committed by Maarten Lankhorst
Makes lockdep a lot more useful. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
Now that the code is compatible in semantics, flip the switch. Use ww_mutex instead of the homegrown implementation. ww_mutex uses -EDEADLK to signal that the caller has to back off, and -EALREADY to indicate this buffer is already held by the caller. ttm used -EAGAIN and -EDEADLK for those, respectively, so some changes were needed to handle this correctly. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
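A minimal sketch of the ww_mutex back-off protocol described above (hypothetical function; the caller's ctx is assumed to have been set up with ww_acquire_init() against a ww_class), not the actual TTM conversion:

    #include <linux/ww_mutex.h>

    /* Try to lock two objects in one acquire context. */
    static int sketch_reserve_pair(struct ww_mutex *a, struct ww_mutex *b,
                                   struct ww_acquire_ctx *ctx)
    {
            int ret;

            ret = ww_mutex_lock(a, ctx);
            if (ret)        /* -EALREADY: this context already holds a */
                    return ret;

            ret = ww_mutex_lock(b, ctx);
            if (ret == -EDEADLK) {
                    /* Back off: drop what we hold, then sleep on the contended
                     * lock before the caller retries the whole sequence. */
                    ww_mutex_unlock(a);
                    ww_mutex_lock_slow(b, ctx);
                    ww_mutex_unlock(b);
            }
            return ret;
    }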
-
Committed by Maarten Lankhorst
This commit converts the source of the val_seq counter to the ww_mutex api. The reservation objects are converted later, because there is still a lockdep splat in nouveau that has to be resolved first. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 16 April 2013, 1 commit
-
-
Committed by Libin
The (*->vm_end - *->vm_start) >> PAGE_SHIFT operation is implemented as an inline function vma_pages() in linux/mm.h, so use it. Signed-off-by: Libin <huawei.libin@huawei.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
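A minimal sketch of the substitution (hypothetical wrapper name):

    #include <linux/mm.h>

    static unsigned long sketch_vma_npages(struct vm_area_struct *vma)
    {
            /* Open-coded form being replaced:
             *   return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
             */
            return vma_pages(vma);
    }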
-
- 12 April 2013, 1 commit
-
-
Committed by Dave Airlie
qxl wants to use io mapping like i915 gem does; for now just export the symbols so the driver can implement atomic page maps using io mapping. Signed-off-by: Dave Airlie <airlied@redhat.com>
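A rough sketch of the atomic io-mapping usage the driver is after (generic io_mapping API with assumed BAR base, size and offset values; the symbols actually exported by this commit are not shown):

    #include <linux/io.h>
    #include <linux/io-mapping.h>

    /* Map a write-combined window over a BAR and copy one page atomically. */
    static void sketch_iomap_copy_page(resource_size_t bar_base, unsigned long bar_size,
                                       unsigned long page_offset, const void *src)
    {
            struct io_mapping *iomap;
            void __iomem *vaddr;

            iomap = io_mapping_create_wc(bar_base, bar_size);
            if (!iomap)
                    return;

            vaddr = io_mapping_map_atomic_wc(iomap, page_offset);
            memcpy_toio(vaddr, src, PAGE_SIZE);
            io_mapping_unmap_atomic(vaddr);

            io_mapping_free(iomap);
    }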
-
- 23 February 2013, 1 commit
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 08 February 2013, 1 commit
-
-
Committed by Daniel Vetter
This fixes up

commit e8e89622
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Tue Dec 18 22:25:11 2012 +0100

    drm/ttm: fix fence locking in ttm_buffer_object_transfer

which leaves behind a might_sleep in atomic context, since the fence_lock spinlock is held over a kmalloc(GFP_KERNEL) call. The fix is to revert the above commit and only take the lock where we need it, around the call to ->sync_obj_ref.

v2: Fixup things noticed by Maarten Lankhorst:
- Brown paper bag locking bug.
- No need for kzalloc if we clear the entire thing on the next line.
- Check for bo->sync_obj (totally unlikely race, but still someone else could have snuck in) and clear fbo->sync_obj if it's cleared already.

Reported-by: Dave Airlie <airlied@gmail.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
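A minimal sketch of the shape of the fix, under the TTM layout of that era (fbo, bdev->fence_lock and driver->sync_obj_ref are taken from the commit text; the surrounding ttm_buffer_object_transfer() body is elided): allocate with GFP_KERNEL before any spinlock is held, and hold fence_lock only around the sync_obj reference:

    fbo = kmalloc(sizeof(*fbo), GFP_KERNEL);        /* may sleep: do it unlocked */
    if (!fbo)
            return -ENOMEM;
    *fbo = *bo;                                     /* plain copy, fields fixed up below */

    spin_lock(&bdev->fence_lock);
    if (bo->sync_obj)
            fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
    else
            fbo->sync_obj = NULL;
    spin_unlock(&bdev->fence_lock);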
-
- 21 January 2013, 2 commits
-
-
Committed by Dave Airlie
If we have a move notify callback and moving fails, we call move notify the opposite way around; however, this ends up with *mem containing the mm_node from the bo, which means we double free it. This is a follow-on to the previous fix. Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Dave Airlie
When we are using memcpy to move objects around and we fail to memcpy, due to lack of memory to populate or failure to finish the copy, we don't want to destroy the mm_node that has been copied into old_copy. While working on a new kms driver that uses memcpy, if I overallocated bo's up to the memory limits and eviction failed, the machine would oops soon after due to having an active bo with an already freed drm_mm embedded in it; freeing it a second time didn't end well. Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 15 January 2013, 5 commits
-
-
Committed by Maarten Lankhorst
All legitimate users of this function outside ttm_bo.c are gone; now it's only an implementation detail. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
Committed by Maarten Lankhorst
This requires re-use of the seqno, which increases fairness slightly. Instead of spinning with a new seqno every time, we keep the current one but still drop all other reservations we hold. Only when we succeed do we try to get back our other reservations again. This should increase fairness slightly as well. Changes since v1: - Increase val_seq before calling ttm_bo_reserve_slowpath_nolru and retrying to take all entries, to prevent a race. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
Committed by Maarten Lankhorst
Instead of dropping everything, waiting for the bo to be unreserved and trying again, a better strategy is to do a blocking wait. This can be mapped a lot better to a mutex_lock-like call. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
Committed by Maarten Lankhorst
With the lru lock no longer required for protecting reservations, we can just do a ttm_bo_reserve_nolru on -EBUSY and handle all errors in a single path. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
Committed by Maarten Lankhorst
There should no longer be assumptions that reserve will always succeed with the lru lock held, so we can safely break the whole atomic reserve/lru thing. As a bonus this fixes most lockdep annotations for reservations. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
- 08 January 2013, 1 commit
-
-
Committed by Daniel Vetter
Noticed while reviewing the fence locking in the radeon pageflip handler. v2: Instead of grabbing the bdev->fence_lock in object_transfer, just move the single callsite of that function a few lines so that it is protected by the fence_lock. Suggested by Jerome Glisse. v3: Fix typo in commit message. Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 20 December 2012, 1 commit
-
-
Committed by Maarten Lankhorst
Fix regression introduced by 85b144f8 "drm/ttm: call ttm_bo_cleanup_refs with reservation and lru lock held, v3". The slowpath ttm_bo_cleanup_refs_and_unlock accidentally tried to increase the refcount on &bo->sync_obj instead of bo->sync_obj. The compiler didn't complain since sync_obj_ref takes a void pointer, so it was still valid C. This could result in lockups, memory corruptions, and warnings like these when graphics card VRAM usage is high:

------------[ cut here ]------------
WARNING: at include/linux/kref.h:42 radeon_fence_ref+0x2c/0x40()
Hardware name: System Product Name
Pid: 157, comm: X Not tainted 3.7.0-rc7-00520-g85b144f8-dirty #174
Call Trace:
 [<ffffffff81058c84>] ? warn_slowpath_common+0x74/0xb0
 [<ffffffff8129273c>] ? radeon_fence_ref+0x2c/0x40
 [<ffffffff8125e95c>] ? ttm_bo_cleanup_refs_and_unlock+0x18c/0x2d0
 [<ffffffff8125f17c>] ? ttm_mem_evict_first+0x1dc/0x2a0
 [<ffffffff81264452>] ? ttm_bo_man_get_node+0x62/0xb0
 [<ffffffff8125f4ce>] ? ttm_bo_mem_space+0x28e/0x340
 [<ffffffff8125fb0c>] ? ttm_bo_move_buffer+0xfc/0x170
 [<ffffffff810de172>] ? kmem_cache_alloc+0xb2/0xc0
 [<ffffffff8125fc15>] ? ttm_bo_validate+0x95/0x110
 [<ffffffff8125ff7c>] ? ttm_bo_init+0x2ec/0x3b0
 [<ffffffff8129419a>] ? radeon_bo_create+0x18a/0x200
 [<ffffffff81293e80>] ? radeon_bo_clear_va+0x40/0x40
 [<ffffffff812a5342>] ? radeon_gem_object_create+0x92/0x160
 [<ffffffff812a575c>] ? radeon_gem_create_ioctl+0x6c/0x150
 [<ffffffff812a529f>] ? radeon_gem_object_free+0x2f/0x40
 [<ffffffff81246b60>] ? drm_ioctl+0x420/0x4f0
 [<ffffffff812a56f0>] ? radeon_gem_pwrite_ioctl+0x20/0x20
 [<ffffffff810f53a4>] ? do_vfs_ioctl+0x2e4/0x4e0
 [<ffffffff810e5588>] ? vfs_read+0x118/0x160
 [<ffffffff810f55ec>] ? sys_ioctl+0x4c/0xa0
 [<ffffffff810e5851>] ? sys_read+0x51/0xa0
 [<ffffffff814b0612>] ? system_call_fastpath+0x16/0x1b

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de> Acked-by: Paul Menzel <paulepanter@users.sourceforge.net> Signed-off-by: Dave Airlie <airlied@redhat.com>
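A tiny illustration of why the typo compiled cleanly (assumed, simplified declarations): a void * parameter silently accepts a pointer-to-pointer, so only the runtime kref warning catches the mistake:

    struct bo { void *sync_obj; };

    void *sync_obj_ref(void *sync_obj);      /* driver callback takes an untyped pointer */

    static void sketch(struct bo *bo)
    {
            sync_obj_ref(&bo->sync_obj);     /* buggy: passes a void **, still valid C */
            sync_obj_ref(bo->sync_obj);      /* fixed: passes the sync object itself */
    }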
-
- 10 December 2012, 5 commits
-
-
Committed by Maarten Lankhorst
All items on the lru list are always reservable, so this is a stupid thing to keep. Not only that, it is used in a way which would guarantee deadlocks if it were ever to be set to block on reserve. This is a lot of churn, but mostly because of the removal of the argument, which can be nested arbitrarily deeply in many places. No change of code in this patch except removal of the no_wait_reserve argument; the previous patch removed the use of no_wait_reserve. v2: - Warn if -EBUSY is returned on reservation; all objects on the list should be reservable. Adjusted patch slightly due to conflicts. v3: - Focus on no_wait_reserve removal only. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
Replace the goto loop with a simple for each loop, and only run the delayed destroy cleanup if we can reserve the buffer first. No race occurs, since the lru lock is never dropped any more. An empty list and a list full of unreservable buffers both cause -EBUSY to be returned, which is identical to the previous situation, because previously buffers on the lru list were always guaranteed to be reservable. This should work since currently ttm guarantees items on the lru are always reservable, and reserving items blockingly with some bo held is enough to cause you to run into a deadlock. Currently this is not a concern since removal from the lru list and reservations are always done atomically, but when this guarantee no longer holds, we have to handle this situation or end up with possible deadlocks. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
Replace the while loop with a simple for each loop, and only run the delayed destroy cleanup if we can reserve the buffer first. No race occurs, since the lru lock is never dropped any more. An empty list and a list full of unreservable buffers both cause -EBUSY to be returned, which is identical to the previous situation. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
By removing the unlocking of lru and retaking it immediately, a race is removed where the bo is taken off the swap list or the lru list between the unlock and relock. As such the cleanup_refs code can be simplified: it will attempt to call ttm_bo_wait non-blockingly, and if that fails it will drop the locks and perform a blocking wait, or return an error if no_wait_gpu was set. The need for looping is also eliminated, since swapout and evict_mem_first will always follow the destruction path, so no new fence is allowed to be attached. As far as I can see this may already have been the case, but the unlocking/relocking required a complicated loop to deal with re-reservation. Changes since v1: - Simplify no_wait_gpu case by folding it in with empty ddestroy. - Hold a reservation while calling ttm_bo_cleanup_memtype_use again. Changes since v2: - Do not remove bo from lru list while waiting. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
This requires changing the order in ttm_bo_cleanup_refs_or_queue to take the reservation first, as there is otherwise no race-free way to take the lru lock before the fence_lock. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 28 November 2012, 3 commits
-
-
Committed by Thomas Hellstrom
Removes the need for a write lock each time we call ttm_bo_unref(). v2: Remove an unused variable. v3: Really remove the unused variable. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Thomas Hellstrom
Also move a kref_init() out of the spinlocked region. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Thomas Hellstrom
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 20 November 2012, 9 commits
-
-
Committed by Maarten Lankhorst
This is similar to other platforms that don't allow command submission to buffers locked on the cpu. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Thomas Hellstrom
Reservation locking currently always takes place under the LRU spinlock. Hence, strictly there is no need for an atomic_cmpxchg call; we can use atomic_read followed by atomic_write since nobody else will ever reserve without the lru spinlock held. At least on Intel this should remove a locked bus cycle on successful reserve. Note that this commit may be obsoleted by the cross-device reservation work. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
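A minimal sketch of the relaxation (hypothetical structure, mirroring the era's bo->reserved counter): with every reserver serialised by the lru spinlock, the locked cmpxchg can become a plain read plus set:

    #include <linux/atomic.h>

    struct sketch_bo {
            atomic_t reserved;
    };

    /* Caller holds the LRU spinlock, so no other reserver can race us. */
    static int sketch_reserve_locked(struct sketch_bo *bo)
    {
            if (atomic_read(&bo->reserved))
                    return -EBUSY;
            /* Previously: if (atomic_cmpxchg(&bo->reserved, 0, 1)) return -EBUSY; */
            atomic_set(&bo->reserved, 1);
            return 0;
    }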
-
Committed by Thomas Hellstrom
The most-used lookup+get put+potential_destroy path of TTM objects is converted to use RCU locks. This will substantially decrease the amount of locked bus cycles during normal operation. Since we use kfree_rcu to free the objects, no rcu synchronization is needed at module unload time. v2: Don't touch include/linux/kref.h v3: Adapt to kref_get_unless_zero return value change Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
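A minimal sketch of the lookup pattern described (hypothetical object type and find helper; the real code goes through TTM's object hash): look up under rcu_read_lock(), take a reference only if the refcount is still non-zero, and defer the free with kfree_rcu():

    #include <linux/kref.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct sketch_obj {
            struct kref refcount;
            struct rcu_head rcu;
    };

    /* Assumed lookup primitive; the object stays visible to RCU readers until freed. */
    struct sketch_obj *sketch_find(unsigned long handle);

    static struct sketch_obj *sketch_lookup_get(unsigned long handle)
    {
            struct sketch_obj *obj;

            rcu_read_lock();
            obj = sketch_find(handle);
            if (obj && !kref_get_unless_zero(&obj->refcount))
                    obj = NULL;             /* raced with the final put */
            rcu_read_unlock();
            return obj;
    }

    static void sketch_release(struct kref *kref)
    {
            struct sketch_obj *obj = container_of(kref, struct sketch_obj, refcount);

            kfree_rcu(obj, rcu);            /* no synchronize_rcu() needed on this path */
    }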
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
vmwgfx was its only user and always set it to the same value. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Marcin Slusarz
It's unused. Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Marcin Slusarz
It's unused. Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Marcin Slusarz
All drivers set it to 0 and nothing uses it. Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 16 November 2012, 2 commits
-
-
Committed by Akinobu Mita
It is unnecessary to disable preemption explicitly while calling copy_highpage(), because copy_highpage() will do it again through kmap_atomic/kunmap_atomic. Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
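A minimal before/after sketch (assumed surrounding copy helper, not the exact TTM hunk):

    #include <linux/highmem.h>

    static void sketch_copy_page(struct page *to, struct page *from)
    {
            /* Before:
             *   preempt_disable();
             *   copy_highpage(to, from);
             *   preempt_enable();
             * copy_highpage() already maps both pages with kmap_atomic()/
             * kunmap_atomic(), so the explicit preemption toggling adds nothing.
             */
            copy_highpage(to, from);
    }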
-
Committed by Zhao Yakui
The TTM page can be allocated from high memory. In such a case it is wrong to use page_address(page) as the virtual address for the high memory page. bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=50241 Signed-off-by: Zhao Yakui <yakui.zhao@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
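A minimal sketch of the distinction, assuming a page that may come from the highmem zone (hypothetical helper): page_address() is only valid for pages with a permanent kernel mapping, so a temporary mapping must be created instead:

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void sketch_clear_page(struct page *page)
    {
            /* Wrong for highmem pages: page_address(page) may be NULL here. */
            void *addr = kmap(page);        /* creates a temporary mapping if needed */

            memset(addr, 0, PAGE_SIZE);
            kunmap(page);
    }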
-
- 07 November 2012, 1 commit
-
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 23 October 2012, 2 commits
-
-
Committed by Thomas Hellstrom
In theory, that function could release the lru lock between checking for a bo on the ddestroy list and a successful reserve, if the bo was already reserved and the function was called with waiting reserves allowed. However, all current reservers of a bo on the ddestroy list would atomically take the bo off the list after a successful reserve, so this race should not have been hit; hence there is no need to backport for stable. This patch also fixes a case found by Maarten Lankhorst where ttm_mem_evict_first called with no_wait_gpu would incorrectly spin waiting for bo idle if trying to evict a busy buffer that also sits on the ddestroy list. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Thomas Hellstrom
The ttm_mem_evict_first function could theoretically drop the lru lock without retrying if a reservation from off the LRU list ended up waiting. However, since there are currently no users that could cause a wait in that situation, this is not suitable for stable. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 09 October 2012, 1 commit
-
-
Committed by Konstantin Khlebnikov
A long time ago, in v2.4, VM_RESERVED kept the swapout process off the VMA; it has since lost its original meaning but still has some effects:

 | effect                 | alternative flags
-+------------------------+---------------------------------------------
1| account as reserved_vm | VM_IO
2| skip in core dump      | VM_IO, VM_DONTDUMP
3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP

This patch removes the reserved_vm counter from mm_struct. Seems like nobody cares about it; it is not exported to userspace directly, it only reduces total_vm shown in proc. Thus VM_RESERVED can be replaced with VM_IO or the pair VM_DONTEXPAND | VM_DONTDUMP. remap_pfn_range() and io_remap_pfn_range() set VM_IO|VM_DONTEXPAND|VM_DONTDUMP. remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP. [akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup] Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Carsten Otte <cotte@de.ibm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Cyrill Gorcunov <gorcunov@openvz.org> Cc: Eric Paris <eparis@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Hugh Dickins <hughd@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Morris <james.l.morris@oracle.com> Cc: Jason Baron <jbaron@redhat.com> Cc: Kentaro Takeda <takedakn@nttdata.co.jp> Cc: Matt Helsley <matthltc@us.ibm.com> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Robert Richter <robert.richter@amd.com> Cc: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Venkatesh Pallipadi <venki@google.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
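A minimal sketch of the kind of replacement a driver's mmap handler makes (hypothetical handler; the exact flag set depends on the call site, per the table above):

    #include <linux/fs.h>
    #include <linux/mm.h>

    static int sketch_mmap(struct file *filp, struct vm_area_struct *vma)
    {
            /* Before: vma->vm_flags |= VM_RESERVED; */
            vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
            return 0;
    }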
-