- 02 Nov 2017, 12 commits
-
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
This is already handled in the top-level gem_new() ioctl in another manner, but that handling will be removed in a future commit. Ideally we'd not need to check up-front at all and could let the VMM code handle error checking, but there are paths in the current BO management code where this isn't possible: map() is not always called during BO creation, and map() calls are not allowed to fail during buffer migration.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
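The kind of up-front check being described might look roughly like the sketch below. This is a generic illustration under stated assumptions: demo_vma, demo_memory and their page_shift fields are hypothetical stand-ins, not the real nouveau/nvkm structures.

```c
#include <linux/errno.h>

/* Hypothetical stand-ins for a VMA and its backing memory; the real
 * nouveau/nvkm types differ. */
struct demo_vma {
	unsigned int page_shift;   /* page size the VMA was allocated for */
};

struct demo_memory {
	unsigned int page_shift;   /* largest page size the memory can back */
};

/*
 * Up-front compatibility check at BO creation time.  It has to happen
 * here because map() is not always called during BO creation and is not
 * allowed to fail during buffer migration.
 */
static int demo_check_map(const struct demo_vma *vma,
			  const struct demo_memory *mem)
{
	if (mem->page_shift < vma->page_shift)
		return -EINVAL;
	return 0;
}
```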
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
If the VMA is being deleted, we no longer need to explicitly unmap it first. The MMU code will automatically merge the operations into a single page tree walk.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
The conditional is the same for every mapping.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
We don't really care about where the memory is, just that it's compatible with a VMA allocated for a given page size.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
It's far more convenient to deal with it like this.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Upcoming changes will remove the nvkm_vmm pointer from nvkm_vma, instead requiring it to be explicitly specified on each operation. It's not currently possible to get this information for BAR1 mappings, so let's fix that ahead of time.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
- 05 Apr 2017, 1 commit
-
-
Committed by Christian König
This allows drivers to handle io_mem mappings on their own.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
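A driver-side hook for this might look roughly like the sketch below. It assumes an io_mem_pfn-style callback and the ttm_mem_reg bus.base/bus.offset fields of that TTM generation; treat both as assumptions to verify against the tree in use, and demo_ttm_io_mem_pfn is of course a made-up name.

```c
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/*
 * Sketch of a driver-provided PFN lookup for io_mem mappings.  This
 * simply mimics a linear aperture; a real driver would apply its own
 * aperture layout here.  The callback shape and the bus.base/bus.offset
 * fields are assumptions about that era's TTM.
 */
static unsigned long demo_ttm_io_mem_pfn(struct ttm_buffer_object *bo,
					 unsigned long page_offset)
{
	struct ttm_mem_reg *mem = &bo->mem;

	return ((mem->bus.base + mem->bus.offset) >> PAGE_SHIFT) + page_offset;
}
```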
-
- 17 Feb 2017, 5 commits
-
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
We never have any need for a doubly linked list here, and as there's generally a large number of these objects, replace it with a singly linked list in order to save some memory.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
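To illustrate the saving with a generic sketch (not the actual nouveau types): an embedded prev/next pair costs two pointers per object, a singly linked list only one.

```c
#include <stdio.h>

/* Generic illustration only; the real nouveau objects look different. */
struct node_doubly {
	struct node_doubly *prev, *next;  /* embedded doubly linked list */
	unsigned long payload;
};

struct node_singly {
	struct node_singly *next;         /* singly linked list */
	unsigned long payload;
};

int main(void)
{
	/* On a typical 64-bit build this prints 24 vs 16 bytes per node. */
	printf("doubly linked: %zu bytes, singly linked: %zu bytes\n",
	       sizeof(struct node_doubly), sizeof(struct node_singly));
	return 0;
}
```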
-
- 28 Jan 2017, 2 commits
-
-
Committed by Christian König
The additional housekeeping had too much CPU overhead; let's use the BO priorities instead.
agd: also revert hibmc changes
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-and-Tested-by: Roger.He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Nicolai Hähnle
Ensure that the driver can listen to evictions even when they don't take the path through ttm_bo_driver::move. This is crucial for amdgpu, which relies on an eviction counter to skip re-binding page tables when possible.
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
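In the spirit of that eviction counter, a driver move_notify() hook might look roughly like the sketch below. The three-argument (bo, evict, new_mem) form is assumed here; the exact TTM signature has varied between kernel versions, and the demo_* names are hypothetical.

```c
#include <linux/atomic.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Global counter for illustration; a real driver would keep this per VM. */
static atomic64_t demo_num_evictions = ATOMIC64_INIT(0);

/*
 * Sketch of a move_notify() hook that records evictions.  The assumed
 * (bo, evict, new_mem) signature may not match every kernel version.
 */
static void demo_bo_move_notify(struct ttm_buffer_object *bo,
				bool evict,
				struct ttm_mem_reg *new_mem)
{
	if (evict)
		atomic64_inc(&demo_num_evictions);

	/* A VM could later compare a cached snapshot of this counter to
	 * decide whether its page tables need to be re-bound. */
}
```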
-
- 13 Dec 2016, 1 commit
-
-
Committed by Ben Skeggs
TTM was changed a while back to allow for pipelining of buffer moves, and part of this was the removal of waiting for a BO to idle before calling move(), placing the responsibility on the driver to do this if required. That's all well and good, except that we make use of move_notify() to handle mapping/unmapping from the GPU VMM, as move() isn't called on all paths. This commit adds a wait before unmapping from a VMM in move_notify(), to prevent GPU page faults where a buffer is still being accessed.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Cc: stable@vger.kernel.org [v4.8+]
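The shape of that fix is roughly the sketch below, not the literal nouveau change: demo_vma_unmap() is a made-up stand-in for the driver's real VMM unmap, and the three-argument ttm_bo_wait(bo, intr, no_wait) helper is assumed from that kernel generation.

```c
#include <drm/ttm/ttm_bo_api.h>

/* Stand-in for the driver's real VMM unmap of this BO. */
static void demo_vma_unmap(struct ttm_buffer_object *bo)
{
}

/*
 * Sketch of waiting for the BO to idle before tearing down its GPU VMM
 * mapping in move_notify(), so in-flight GPU access cannot fault.
 */
static void demo_bo_move_ntfy(struct ttm_buffer_object *bo)
{
	/* Uninterruptible, blocking wait: the buffer may still be busy. */
	if (ttm_bo_wait(bo, false, false) == 0)
		demo_vma_unmap(bo);
}
```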
-
- 26 Oct 2016, 1 commit
-
-
Committed by Christian König
This way the driver can decide whether it is valuable to evict a BO or not. The current implementation is added as the default for all existing drivers.
v2: fix some typos found during internal testing
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
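A driver override of that decision might look roughly like the sketch below; the eviction_valuable() callback shape and the ttm_bo_eviction_valuable() default are given from memory, so verify them against the kernel version in use.

```c
#include <linux/types.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>

/*
 * Sketch of a driver eviction_valuable() hook.  A driver could veto
 * evictions it knows are pointless for the requested placement; this
 * sketch simply defers to TTM's default heuristic.
 */
static bool demo_eviction_valuable(struct ttm_buffer_object *bo,
				   const struct ttm_place *place)
{
	return ttm_bo_eviction_valuable(bo, place);
}
```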
-
- 25 Oct 2016, 1 commit
-
-
Committed by Chris Wilson
I plan to usurp the short name of struct fence for a core kernel struct, and so I need to rename the specialised fence/timeline for DMA operations to make room. A consensus was reached in https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html that making clear this fence applies to DMA operations was a good thing. Since then the patch has grown a bit as usage increases, so hopefully it remains a good thing!
(v2...: rebase, rerun spatch)
v3: Compile on msm, spotted a manual fixup that I broke.
v4: Try again for msm, sorry Daniel
coccinelle script:
@@
@@
- struct fence
+ struct dma_fence
@@
@@
- struct fence_ops
+ struct dma_fence_ops
@@
@@
- struct fence_cb
+ struct dma_fence_cb
@@
@@
- struct fence_array
+ struct dma_fence_array
@@
@@
- enum fence_flag_bits
+ enum dma_fence_flag_bits
@@
@@
(
- fence_init
+ dma_fence_init
|
- fence_release
+ dma_fence_release
|
- fence_free
+ dma_fence_free
|
- fence_get
+ dma_fence_get
|
- fence_get_rcu
+ dma_fence_get_rcu
|
- fence_put
+ dma_fence_put
|
- fence_signal
+ dma_fence_signal
|
- fence_signal_locked
+ dma_fence_signal_locked
|
- fence_default_wait
+ dma_fence_default_wait
|
- fence_add_callback
+ dma_fence_add_callback
|
- fence_remove_callback
+ dma_fence_remove_callback
|
- fence_enable_sw_signaling
+ dma_fence_enable_sw_signaling
|
- fence_is_signaled_locked
+ dma_fence_is_signaled_locked
|
- fence_is_signaled
+ dma_fence_is_signaled
|
- fence_is_later
+ dma_fence_is_later
|
- fence_later
+ dma_fence_later
|
- fence_wait_timeout
+ dma_fence_wait_timeout
|
- fence_wait_any_timeout
+ dma_fence_wait_any_timeout
|
- fence_wait
+ dma_fence_wait
|
- fence_context_alloc
+ dma_fence_context_alloc
|
- fence_array_create
+ dma_fence_array_create
|
- to_fence_array
+ to_dma_fence_array
|
- fence_is_array
+ dma_fence_is_array
|
- trace_fence_emit
+ trace_dma_fence_emit
|
- FENCE_TRACE
+ DMA_FENCE_TRACE
|
- FENCE_WARN
+ DMA_FENCE_WARN
|
- FENCE_ERR
+ DMA_FENCE_ERR
)
(
...
)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161025120045.28839-1-chris@chris-wilson.co.uk
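For orientation, here is a minimal sketch of the renamed API in driver-style code, restricted to calls that appear in the rename table above (dma_fence_get(), dma_fence_is_signaled(), dma_fence_put()).

```c
#include <linux/dma-fence.h>

/*
 * Take an extra reference on a fence, poll its state once, and drop the
 * reference again, using the post-rename dma_fence_* names.
 */
static bool demo_fence_peek(struct dma_fence *fence)
{
	bool signaled;

	dma_fence_get(fence);                    /* was fence_get() */
	signaled = dma_fence_is_signaled(fence); /* was fence_is_signaled() */
	dma_fence_put(fence);                    /* was fence_put() */

	return signaled;
}
```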
-
- 22 Sep 2016, 1 commit
-
-
Committed by Karol Herbst
This reverts commit aff51175. The commit caused fence timeouts within nvc0_screen_destroy and most likely other places as well. The most obvious effect is that userspace processes take minutes to actually quit.
Signed-off-by: Karol Herbst <karolherbst@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
- 19 Sep 2016, 1 commit
-
-
Committed by David Herrmann
Rather than using "struct file*", use "struct drm_file*" as the VM tag for BOs. This will pave the way for "struct drm_file*" without any "struct file*" back-pointer.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20160901124837.680-3-dh.herrmann@gmail.com
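A driver-side verify_access() hook in this scheme might look roughly like the sketch below; the helper name drm_vma_node_verify_access(), its drm_file-tagged form, and the placement of the vma_node are assumptions to check against the target tree, and demo_bo is a made-up wrapper type.

```c
#include <linux/fs.h>
#include <linux/kernel.h>
#include <drm/drm_file.h>
#include <drm/drm_vma_manager.h>
#include <drm/ttm/ttm_bo_api.h>

/* Hypothetical driver BO embedding a mmap-offset node. */
struct demo_bo {
	struct ttm_buffer_object tbo;
	struct drm_vma_offset_node vma_node;
};

/*
 * Sketch of a TTM verify_access() hook once BOs are tagged with a
 * struct drm_file instead of a struct file.
 */
static int demo_bo_verify_access(struct ttm_buffer_object *bo,
				 struct file *filp)
{
	struct demo_bo *dbo = container_of(bo, struct demo_bo, tbo);
	struct drm_file *file_priv = filp->private_data;

	return drm_vma_node_verify_access(&dbo->vma_node, file_priv);
}
```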
-
- 08 Aug 2016, 3 commits
-
-
Committed by Michel Dänzer
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Michel Dänzer
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Michel Dänzer
Fixes hangs under memory pressure, e.g. running the piglit test tex3d-maxsize concurrently with other tests.
Fixes: 17d33bc9 ("drm/ttm: drop waiting for idle in ttm_bo_evict.")
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 06 Aug 2016, 1 commit
-
-
Committed by Michel Dänzer
Fixes hangs under memory pressure, e.g. running the piglit test tex3d-maxsize concurrently with other tests.
Fixes: 17d33bc9 ("drm/ttm: drop waiting for idle in ttm_bo_evict.")
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 14 Jul 2016, 4 commits
-
-
Committed by Alexandre Courbot
This flag's only remaining function is to ignore the uncached flag for BOs on coherent architectures. However, the reason for allocating an object uncached on a non-coherent architecture (namely that the cost of doing explicit flushes/invalidations is higher than the benefit of caching the data, because accesses are few and far between) should also apply on architectures for which coherency is maintained implicitly. Thus allocate coherent objects as uncached on all architectures.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
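The placement decision described above might be expressed roughly as in the sketch below, using the TTM_PL_FLAG_* caching flags of that TTM generation (they were reworked in later kernels); demo_tt_placement_flags() is an illustrative helper, not nouveau's code.

```c
#include <linux/types.h>
#include <drm/ttm/ttm_placement.h>

/*
 * Pick caching flags for a system-memory (TT) placement.  Coherent
 * objects are mapped uncached on every architecture, not only on
 * non-coherent ones.
 */
static u32 demo_tt_placement_flags(bool coherent)
{
	u32 flags = TTM_PL_FLAG_TT;

	if (coherent)
		flags |= TTM_PL_FLAG_UNCACHED;
	else
		flags |= TTM_PL_FLAG_CACHED;

	return flags;
}
```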
-
Committed by Alexandre Courbot
TTM-allocated coherent objects were populated using the DMA API and accessed through the mapping it returned, to work around coherency issues. These issues seem to have been solved, so remove this extra case and use the regular kernel mapping functions.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
- 08 Jul 2016, 3 commits
-
-
Committed by Christian König
It isn't used, and not waiting for the GPU after scheduling a move is actually quite dangerous.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
When we want to pipeline accelerated moves, we need to wait in the fallback path.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Wait for idle before moving the BO in all drivers implementing an accelerated move function. This should keep the current behavior when the pre-move wait is removed.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
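A driver-side pre-copy wait of that kind might look roughly like the sketch below. The five-argument move() shape and the three-argument ttm_bo_wait() helper are assumptions about that TTM generation, and demo_copy_buffer() is a made-up stand-in for the hardware copy.

```c
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Stand-in for the driver's real accelerated copy of bo into new_mem. */
static int demo_copy_buffer(struct ttm_buffer_object *bo,
			    struct ttm_mem_reg *new_mem)
{
	return 0;
}

/*
 * Sketch of a driver move() that waits for the BO to go idle before the
 * accelerated copy, mirroring the wait that used to live in common TTM
 * code before moves were pipelined.
 */
static int demo_bo_move(struct ttm_buffer_object *bo, bool evict,
			bool interruptible, bool no_wait_gpu,
			struct ttm_mem_reg *new_mem)
{
	int ret = ttm_bo_wait(bo, interruptible, no_wait_gpu);

	if (ret)
		return ret;

	return demo_copy_buffer(bo, new_mem);
}
```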
-
- 20 May 2016, 1 commit
-
-
Committed by Ben Skeggs
Fixes an out-of-tree build issue where uapi/drm/nouveau_drm.h gets picked up instead.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
- 05 May 2016, 3 commits
-
-
Committed by Christian König
This gives the driver fine-grained control over where a BO is added into the LRU.
v2: fix typo in comment
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Not used any more.
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Christian König
Not used any more.
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-