- 05 March 2015 (1 commit)

By Alex Deucher:
We need to store device offsets in 64 bits, as the device address space may be larger than the CPU's. Fixes GPU init failures on Radeons with 4 GB or more of VRAM on 32-bit kernels: we put VRAM at the start of the GPU's address space, so the GART aperture starts at 4 GB, causing all GPU addresses in the GART aperture to get truncated.
Bug: https://bugs.freedesktop.org/show_bug.cgi?id=89072
[airlied: fix warning on nouveau build]
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: thellstrom@vmware.com
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
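The failure mode above is plain integer truncation: a GPU address just past 4 GB no longer fits in a 32-bit type. A minimal, self-contained C illustration of the arithmetic (illustrative only; the variable names are made up and this is not the actual TTM code):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* VRAM occupies the first 4 GB of the GPU address space, so the
	 * GART aperture begins at the 4 GB mark. */
	uint64_t gart_base = 4ULL * 1024 * 1024 * 1024;
	uint64_t gpu_addr  = gart_base + 0x1000;   /* an address inside the GART */

	/* Storing the offset in a 32-bit type, as a 32-bit kernel effectively
	 * did before this fix, silently drops the high bits. */
	uint32_t truncated = (uint32_t)gpu_addr;

	printf("full address: 0x%llx\n", (unsigned long long)gpu_addr); /* 0x100001000 */
	printf("truncated:    0x%x\n", truncated);                      /* 0x1000     */
	return 0;
}
```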
-
- 04 December 2014 (1 commit)

By Christian König:
This patch adds an optional list_head parameter to ttm_eu_reserve_buffers. If specified, duplicates in the execbuf list are no longer reported as errors, but are moved to this list instead.
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
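A hedged sketch of how a caller might use the new duplicates list. The ttm_eu_reserve_buffers() signature, the header path, and the struct field names shown here are assumptions based on the description above rather than something this log confirms, and example_handle_duplicate() is a hypothetical driver helper:

```c
#include <drm/ttm/ttm_execbuf_util.h>

/* Assumed post-patch signature:
 *   int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
 *                              struct list_head *list, bool intr,
 *                              struct list_head *dups);
 */
static int example_reserve(struct ww_acquire_ctx *ticket,
			   struct list_head *validated)
{
	struct list_head duplicates;
	struct ttm_validate_buffer *entry;
	int ret;

	INIT_LIST_HEAD(&duplicates);

	ret = ttm_eu_reserve_buffers(ticket, validated, true, &duplicates);
	if (ret)
		return ret;

	/* Duplicate BOs are no longer reported as an error; they were moved
	 * to 'duplicates' and are already reserved via their first entry. */
	list_for_each_entry(entry, &duplicates, head)
		example_handle_duplicate(entry);

	return 0;
}
```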
-
- 30 September 2014 (1 commit)

By Maarten Lankhorst:
This allows importing reservation objects from dma-bufs.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
- 11 September 2014 (1 commit)

By Christian König:
This patch adds a new flag to the ttm_validate_buffer list to add the fence as shared to the reservation object.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 02 September 2014 (1 commit)

By Maarten Lankhorst:
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
- 01 September 2014 (3 commits)

By Maarten Lankhorst:
This reorders the list to keep track of which buffers are reserved, so previous members are always unreserved. This gets rid of some bookkeeping that is no longer needed, while simplifying the code somewhat.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
By Maarten Lankhorst:
It seems some drivers, like vmwgfx, really want this as a parameter.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
By Maarten Lankhorst:
No users are left, kill it off! :D Conversion to the reservation API is next on the list; after that the functionality can be restored with RCU.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
- 27 August 2014 (1 commit)

By Christian König:
This allows us to specify in a more fine-grained way where to place the buffer object.
v2: rebased on drm-next, add bochs changes as well
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
- 09 August 2014 (1 commit)

By Alexandre Courbot:
Pages allocated using the DMA API have a coherent memory mapping. Make this mapping visible to drivers so they can decide to use it instead of creating their own redundant one.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Acked-by: David Airlie <airlied@linux.ie>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
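For context, the "coherent memory mapping" mentioned above is the CPU virtual address that the DMA API returns alongside the bus address. A minimal sketch of that general pattern (plain DMA API usage with hypothetical example_ wrappers, not the TTM page-pool code this commit actually modifies):

```c
#include <linux/dma-mapping.h>

/* Fragment of driver code: allocate a DMA-coherent buffer.  The DMA API
 * returns both a CPU virtual address (already coherently mapped) and a
 * device-visible bus address, so the driver does not need to build a
 * second CPU mapping of its own. */
static void *example_alloc(struct device *dev, size_t size, dma_addr_t *bus)
{
	return dma_alloc_coherent(dev, size, bus, GFP_KERNEL);
}

static void example_free(struct device *dev, size_t size,
			 void *cpu, dma_addr_t bus)
{
	dma_free_coherent(dev, size, cpu, bus);
}
```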
-
- 08 July 2014 (2 commits)

By Christian König:
bo->mem.placement is not initialized when ttm_bo_man_get_node is called, so the flag had no effect at all.
v2: change nouveau and vmwgfx as well
Signed-off-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Alexandre Courbot:
ttm_tt_cache_flush's implementation was removed in 2009 by commit c9c97b8c, but its declaration has been hiding in ttm_bo_driver.h since then. It has been surviving in the dark for too long now; give it the mercy blow.
Reviewed-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
-
- 26 May 2014 (1 commit)

By Alexandre Courbot:
The kerneldoc header of ttm_bo_create() was referring to another (nonexistent) function and had a few obsolete or incorrect arguments.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 04 April 2014 (2 commits)

By Thomas Hellstrom:
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
-
By Lauri Kasanen:
Clients like i915 need to segregate cache domains within the GTT, which can lead to small amounts of fragmentation. By allocating the uncached buffers from the bottom and the cacheable buffers from the top, we can reduce the amount of wasted space and also optimize allocation of the mappable portion of the GTT to only those buffers that require CPU access through the GTT. For other drivers, allocating small bos from one end and large ones from the other helps improve the quality of fragmentation.
Based on drm_mm work by Chris Wilson.
v3: Changed to use a TTM placement flag
v2: Updated kerneldoc
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Christian König <deathsimple@vodafone.de>
Signed-off-by: Lauri Kasanen <cand@gmx.com>
Signed-off-by: David Airlie <airlied@redhat.com>
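The two-ended idea is easiest to see in a toy allocator. A purely conceptual C sketch that assumes nothing about drm_mm or TTM internals: "top-down" requests are carved off the high end of a free range and everything else off the low end, so the two classes of buffers do not interleave and fragment each other.

```c
#include <stdbool.h>
#include <stdint.h>

struct range_alloc {
	uint64_t start, end;   /* current free window [start, end) */
};

/* Carve 'size' bytes out of the free window.  Top-down requests come off
 * the high end, bottom-up requests off the low end. */
static bool range_take(struct range_alloc *ra, uint64_t size, bool topdown,
		       uint64_t *offset)
{
	if (ra->end - ra->start < size)
		return false;
	if (topdown) {
		ra->end -= size;
		*offset = ra->end;
	} else {
		*offset = ra->start;
		ra->start += size;
	}
	return true;
}
```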
-
- 28 March 2014 (1 commit)

By Thomas Hellstrom:
A function to be used to check whether a caller has put a ref object on (that is, has opened) a struct ttm_base_object.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
-
- 16 March 2014 (1 commit)

By David Herrmann:
With dev->anon_inode we have a global address_space ready for operation right from the beginning. Therefore, there is no need to do a delayed setup with TTM. Instead, set dev_mapping during initialization in ttm_bo_device_init() and remove any "if (dev_mapping)" conditions.
Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
-
- 18 February 2014 (1 commit)

By Alexandre Courbot:
Declare 'struct device' explicitly in ttm_page_alloc.h, as this file does not include any file declaring it. This removes the following warning: warning: 'struct device' declared inside parameter list
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
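The warning and its fix are generic C, independent of TTM. A minimal sketch of the pattern the commit applies, using a hypothetical header rather than the real ttm_page_alloc.h:

```c
/* example_header.h: hypothetical header illustrating the fix */
#ifndef EXAMPLE_HEADER_H
#define EXAMPLE_HEADER_H

/* Forward declaration: without it, a compiler seeing the prototype below
 * warns "'struct device' declared inside parameter list", because the
 * struct tag would then be scoped to that one declaration only. */
struct device;

int example_pool_init(struct device *dev, unsigned int max_pages);

#endif
```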
-
- 08 January 2014 (2 commits)

By Thomas Hellstrom:
When a client looks up a ttm object, don't look it up through the device hash table, but rather from the file hash table. That makes sure that the client has indeed put a reference on the object or, in gem terms, has opened the object, either using prime or using the global "name". To avoid a performance loss, make sure the file hash table entries can be looked up from under an RCU lock and, as a consequence, replace the rwlock with a spinlock, since we never need to take it in read mode only anymore. Finally, add a ttm object lookup function for the device hash table, intended to be used when we put a ref object on a base object or, in gem terms, when we open the object.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
-
By Thomas Hellstrom:
Needed for some vm operations; most notably unmap_mapping_range() with even_cows = 0.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
-
- 20 November 2013 (1 commit)

By Thomas Hellstrom:
Addresses "[BUG] completely bonkers use of set_need_resched + VM_FAULT_NOPAGE". In the first occurrence it was used to try to be nice while releasing the mmap_sem and retrying the fault to work around a locking inversion. The second occurrence was never used. There has been some discussion whether we should change the locking order to mmap_sem -> bo_reserve. This patch doesn't address that issue and leaves that locking order undefined. The solution of releasing the mmap_sem if tryreserve fails and waiting for the buffer to become unreserved is something we want in any case, and it follows how the core vm system waits for pages to become unlocked while releasing the mmap_sem. The code also outlines what needs to be changed if we want to establish the locking order as mmap_sem -> bo::reserve. One slight issue that remains with this code is that the fault handler might be prone to starvation if another thread continuously reserves the buffer. IMO that usage pattern is highly unlikely.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
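The "release the mmap_sem and retry" idea corresponds to the core VM's fault-retry protocol. A hedged sketch of that general shape, written against the fault-handler conventions of kernels of that era; every name starting with example_ is hypothetical, and this is not the actual ttm_bo_vm_fault() code:

```c
/* Fragment of a .fault handler, using the fault API of that era:
 * int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf). */
static int example_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct example_bo *bo = vma->vm_private_data;    /* hypothetical */

	if (!example_bo_tryreserve(bo)) {                /* hypothetical */
		/* Reservation is contended.  If the core VM allows it,
		 * drop mmap_sem, wait for the buffer to become unreserved,
		 * and let the fault be retried from the top. */
		if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
			if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
				up_read(&vma->vm_mm->mmap_sem);
				example_bo_wait_unreserved(bo); /* hypothetical */
			}
			return VM_FAULT_RETRY;
		}
		/* No retry allowed: block here instead. */
		example_bo_reserve(bo);                  /* hypothetical */
	}

	/* ... fault in the page and install the mapping ... */
	return VM_FAULT_NOPAGE;
}
```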
-
- 18 November 2013 (2 commits)

By Thomas Hellstrom:
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
-
By Thomas Hellstrom:
If no reservation ticket is given to the execbuf reservation utilities, try reservation with non-blocking semantics. This is intended for eviction paths that use the execbuf reservation utilities for convenience rather than for deadlock avoidance.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
-
- 06 November 2013 (1 commit)

By Thomas Hellstrom:
Used by the vmwgfx driver.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 25 July 2013 (1 commit)

By David Herrmann:
Use the new vma-manager infrastructure. This doesn't change any implementation details, as the vma-offset-manager is nearly copied 1-to-1 from TTM. The vm_lock is moved into the offset manager so we can drop it from TTM. During lookup, we use the vma locking helpers to take a reference to the found object. In all other scenarios, locking stays the same as before. We always guarantee that drm_vma_offset_remove() is called only during destruction. Hence, helpers like drm_vma_node_offset_addr() are always safe as long as the node has a valid offset. This also drops the addr_space_offset member, as it is a copy of vm_start in vma_node objects. Use the accessor functions instead.
v4:
- remove vm_lock
- use drm_vma_offset_lock_lookup() to protect lookup (instead of vm_lock)
Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Martin Peres <martin.peres@labri.fr>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
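A hedged sketch of how a driver interacts with the vma offset manager described above. The helpers named in the commit text (drm_vma_offset_remove(), drm_vma_node_offset_addr(), drm_vma_offset_lock_lookup()) are taken from it; the add, lookup, and unlock helpers are my assumption about the companion API, and the example_ functions are hypothetical:

```c
#include <drm/drm_vma_manager.h>

/* Publish a buffer object's mmap offset (fragment of driver code). */
static int example_bo_publish(struct drm_vma_offset_manager *mgr,
			      struct drm_vma_offset_node *node,
			      unsigned long num_pages, u64 *mmap_offset)
{
	int ret = drm_vma_offset_add(mgr, node, num_pages);

	if (ret)
		return ret;
	/* Byte offset userspace passes to mmap(); valid until
	 * drm_vma_offset_remove() is called at destruction time. */
	*mmap_offset = drm_vma_node_offset_addr(node);
	return 0;
}

/* Look a node up again, e.g. from the mmap path, under the manager's
 * lookup lock (this replaces TTM's old vm_lock). */
static struct drm_vma_offset_node *
example_bo_lookup(struct drm_vma_offset_manager *mgr,
		  unsigned long start_page, unsigned long num_pages)
{
	struct drm_vma_offset_node *node;

	drm_vma_offset_lock_lookup(mgr);
	node = drm_vma_offset_lookup_locked(mgr, start_page, num_pages);
	/* Caller must take its own reference on the containing object
	 * before dropping the lookup lock. */
	drm_vma_offset_unlock_lookup(mgr);
	return node;
}
```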
-
- 28 June 2013 (4 commits)

By Maarten Lankhorst:
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Maarten Lankhorst:
Makes lockdep a lot more useful.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Maarten Lankhorst:
Now that the code is compatible in semantics, flip the switch: use ww_mutex instead of the homegrown implementation. ww_mutex uses -EDEADLK to signal that the caller has to back off, and -EALREADY to indicate that this buffer is already held by the caller. ttm used -EAGAIN and -EDEADLK for those, respectively, so some changes were needed to handle this correctly.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
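The -EDEADLK back-off the commit describes is the standard wait/wound protocol. A minimal sketch of that protocol for a small set of generic ww_mutexes (plain ww_mutex API usage, not the TTM reservation wrappers that this commit converts):

```c
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(example_ww_class);

/*
 * Acquire a small set of ww_mutexes, backing off and retrying on -EDEADLK
 * as the wait/wound protocol requires.  On success the caller owns every
 * lock and must later unlock them all and call ww_acquire_fini(ctx).
 */
static int example_lock_all(struct ww_mutex **locks, int count,
			    struct ww_acquire_ctx *ctx)
{
	struct ww_mutex *contended = NULL;
	int i, ret;

	ww_acquire_init(ctx, &example_ww_class);
retry:
	for (i = 0; i < count; i++) {
		if (locks[i] == contended) {
			/* Already held: taken via ww_mutex_lock_slow() below. */
			contended = NULL;
			continue;
		}
		ret = ww_mutex_lock(locks[i], ctx);
		if (ret)
			goto err;
	}
	ww_acquire_done(ctx);
	return 0;

err:
	{
		struct ww_mutex *busy = locks[i];

		/* Drop everything acquired so far in this pass. */
		while (--i >= 0)
			ww_mutex_unlock(locks[i]);
		if (contended)
			ww_mutex_unlock(contended);

		if (ret != -EDEADLK) {
			ww_acquire_fini(ctx);
			return ret;
		}
		/* We were told to back off: with no locks held, sleeping on
		 * the contended lock cannot deadlock.  Then start over. */
		contended = busy;
		ww_mutex_lock_slow(contended, ctx);
	}
	goto retry;
}
```

TTM's reservation code wraps this same protocol; the commit's point is that callers now see -EDEADLK and -EALREADY instead of the old -EAGAIN/-EDEADLK pair.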
-
By Maarten Lankhorst:
This commit converts the source of the val_seq counter to the ww_mutex API. The reservation objects are converted later, because there is still a lockdep splat in nouveau that has to be resolved first.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 12 April 2013 (1 commit)

By Dave Airlie:
qxl wants to use io mapping like i915 gem does; for now just export the symbols so the driver can implement atomic page maps using io mapping.
Signed-off-by: Dave Airlie <airlied@redhat.com>
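"Atomic page maps using io mapping" refers to the io_mapping helpers, which let a driver map one page of a large PCI BAR at a time from atomic context instead of ioremapping the whole aperture up front. A hedged sketch of that general API (generic usage; not the qxl or TTM code, and the example_ names are hypothetical):

```c
#include <linux/io.h>
#include <linux/io-mapping.h>

/* Create a write-combined io_mapping covering a VRAM BAR; released later
 * with io_mapping_free(). */
static struct io_mapping *example_map_vram(resource_size_t base,
					   unsigned long size)
{
	return io_mapping_create_wc(base, size);
}

/* Write one dword at 'offset' into the BAR by mapping just the page that
 * contains it, atomically, then dropping the mapping again. */
static void example_write_dword(struct io_mapping *vram, unsigned long offset,
				u32 value)
{
	void __iomem *page = io_mapping_map_atomic_wc(vram, offset & PAGE_MASK);

	writel(value, page + (offset & ~PAGE_MASK));
	io_mapping_unmap_atomic(page);
}
```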
-
- 15 January 2013 (3 commits)

By Maarten Lankhorst:
All legitimate users of this function outside ttm_bo.c are gone; now it's only an implementation detail.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
By Maarten Lankhorst:
Instead of dropping everything, waiting for the bo to be unreserved and trying over, a better strategy is to do a blocking wait. This can be mapped a lot better to a mutex_lock-like call.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
By Maarten Lankhorst:
There should no longer be assumptions that reserve will always succeed with the lru lock held, so we can safely break the whole atomic reserve/lru thing. As a bonus this fixes most lockdep annotations for reservations.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
-
- 10 December 2012 (1 commit)

By Maarten Lankhorst:
All items on the lru list are always reservable, so this is a stupid thing to keep. Not only that, it is used in a way which would guarantee deadlocks if it were ever to be set to block on reserve. This is a lot of churn, but mostly because of the removal of the argument, which can be nested arbitrarily deeply in many places. No change of code in this patch except removal of the no_wait_reserve argument; the previous patch removed the use of no_wait_reserve.
v2:
- Warn if -EBUSY is returned on reservation; all objects on the list should be reservable. Adjusted patch slightly due to conflicts.
v3:
- Focus on no_wait_reserve removal only.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 20 November 2012 (6 commits)

By Maarten Lankhorst:
This is similar to other platforms that don't allow command submission to buffers locked on the CPU.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom:
The most heavily used lookup+get and put+potential_destroy paths of TTM objects are converted to use RCU locks. This will substantially decrease the amount of locked bus cycles during normal operation. Since we use kfree_rcu to free the objects, no rcu synchronization is needed at module unload time.
v2: Don't touch include/linux/kref.h
v3: Adapt to kref_get_unless_zero return value change
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
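The lockless lookup described here follows the usual RCU plus kref_get_unless_zero() pattern, with kfree_rcu() deferring the free past any in-flight readers. A hedged, generic sketch of that pattern; the example_ object, find, and remove helpers are hypothetical, not the actual ttm_object.c code:

```c
#include <linux/kref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_obj {
	struct kref refcount;
	struct rcu_head rcu;
	/* ... payload ... */
};

/* Lookup under rcu_read_lock(): the object may be concurrently unlinked,
 * so only take it if its refcount has not already dropped to zero. */
static struct example_obj *example_lookup_get(unsigned long key)
{
	struct example_obj *obj;

	rcu_read_lock();
	obj = example_hash_find(key);              /* hypothetical RCU-safe find */
	if (obj && !kref_get_unless_zero(&obj->refcount))
		obj = NULL;                        /* lost the race with teardown */
	rcu_read_unlock();
	return obj;
}

static void example_release(struct kref *kref)
{
	struct example_obj *obj = container_of(kref, struct example_obj, refcount);

	example_hash_remove(obj);                  /* hypothetical, under a spinlock */
	/* Readers inside an RCU section may still see the object, so free it
	 * only after a grace period; no synchronize_rcu() needed at unload. */
	kfree_rcu(obj, rcu);
}

static void example_put(struct example_obj *obj)
{
	kref_put(&obj->refcount, example_release);
}
```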
-
By Maarten Lankhorst:
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Maarten Lankhorst:
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Maarten Lankhorst:
vmwgfx was its only user and always sets it to the same..
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Marcin Slusarz:
It's unused.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-