- 20 February 2018, 1 commit

By Tom St Denis:
The dual ret/retval was more complex than it needed to be. Drop the retval variable and assign the appropriate VM fault codes to ret instead.

Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 30 January 2018, 1 commit

By Tom St Denis:
The buf pointer was not being incremented inside the loop, meaning the same block of data would be read or written repeatedly. (v2) Change the 'buf' pointer to uint8_t* type.

Cc: stable@vger.kernel.org
Fixes: 09ac4fcb ("drm/ttm: Implement vm_operations_struct.access v2")
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
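
To illustrate the bug class, here is a minimal, self-contained sketch of a chunked copy loop; the hypothetical copy_in_chunks() merely stands in for the kernel's access helper, and the bug fixed above is precisely a missing `buf += bytes`:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the access helper: copy len bytes in
 * fixed-size chunks. Without the "buf += bytes" advance, every chunk
 * would overwrite the start of buf, which is the bug the commit fixes. */
static void copy_in_chunks(uint8_t *buf, const uint8_t *src,
			   size_t len, size_t chunk)
{
	while (len) {
		size_t bytes = len < chunk ? len : chunk;

		memcpy(buf, src, bytes);
		buf += bytes; /* the increment that was missing */
		src += bytes;
		len -= bytes;
	}
}
```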

- 11 January 2018, 1 commit

By Tan Xiaojun:
No one uses this function except ttm_bo_io_mem_pfn() now, so move the calculation from ttm_bo_default_io_mem_pfn() into ttm_bo_io_mem_pfn() and do some cleanup.

Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 28 December 2017, 2 commits

By Roger He:
Forward the operation context to ttm_tt_populate as well; the ultimate goal is swapout enablement for reserved BOs.

v2: squash in fix for vboxvideo

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
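
A sketch of the resulting plumbing, with mock types standing in for TTM's (the two context fields shown are illustrative, not the complete structure):

```c
#include <stdbool.h>

/* Opaque stand-in for struct ttm_tt. */
struct ttm_tt_sketch;

/* Sketch of the operation context the populate path now receives, so
 * backends know how the caller wants memory operations handled. */
struct ttm_operation_ctx_sketch {
	bool interruptible; /* may we sleep interruptibly? */
	bool no_wait_gpu;   /* fail rather than wait on the GPU */
};

/* The populate entry point gains a context argument. */
int ttm_tt_populate_sketch(struct ttm_tt_sketch *ttm,
			   struct ttm_operation_ctx_sketch *ctx);
```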

By Tan Xiaojun:
The io_mem_pfn field was added in commit ea642c32 ("drm/ttm: add io_mem_pfn callback") and is called unconditionally. However, not all drivers were updated to set it. Use the ttm_bo_default_io_mem_pfn function if a driver did not set its own, and add the new function ttm_bo_io_mem_pfn() as a wrapper.

Signed-off-by: Michal Srb <msrb@suse.com>
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
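
The wrapper follows the familiar "optional callback with a default" pattern; a minimal standalone sketch with mock types (the real TTM structures are far richer, and the default computation here is reduced to base + offset):

```c
struct bo_sketch;

struct driver_sketch {
	/* Optional: drivers that manage their own apertures set this. */
	unsigned long (*io_mem_pfn)(struct bo_sketch *bo,
				    unsigned long page_offset);
};

struct bo_sketch {
	const struct driver_sketch *driver;
	unsigned long base_pfn; /* illustrative source for the default */
};

/* The wrapper: use the driver callback when present, otherwise fall
 * back to the default computation. Callers never crash on a NULL hook. */
static unsigned long bo_io_mem_pfn(struct bo_sketch *bo,
				   unsigned long page_offset)
{
	if (bo->driver->io_mem_pfn)
		return bo->driver->io_mem_pfn(bo, page_offset);
	return bo->base_pfn + page_offset; /* "default" path */
}
```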

- 26 July 2017, 1 commit

By Felix Kuehling:
Allows gdb to access the contents of user-mode mapped BOs. System memory is handled by TTM using kmap; other memory pools require a new driver callback in ttm_bo_driver.

v2:
 * kmap only one page at a time
 * swap in BO if needed
 * make driver callback more generic to handle private memory pools
 * document callback return value
 * WARN_ON -> WARN_ON_ONCE

Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
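
Debugger reads and writes on a mapping are routed through the VM layer's .access hook, which this commit implements; a kernel-style sketch of the wiring (the handler name comes from the commit subject, the surrounding ops are assumptions about the era's TTM code):

```c
/* Kernel-style sketch: ptrace()/gdb accesses to a TTM mapping go
 * through vm_operations_struct.access. */
static const struct vm_operations_struct ttm_bo_vm_ops = {
	.fault = ttm_bo_vm_fault,
	.open = ttm_bo_vm_open,
	.close = ttm_bo_vm_close,
	.access = ttm_bo_vm_access, /* new: lets gdb peek into the BO */
};
```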

- 18 July 2017, 1 commit

By Tom Lendacky:
Since video memory needs to be accessed decrypted, be sure that the memory encryption mask is not set for the video ranges.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Toshimitsu Kani <toshi.kani@hpe.com>
Cc: kasan-dev@googlegroups.com
Cc: kvm@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-efi@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/a19436f30424402e01f63a09b32ab103272acced.1500319216.git.thomas.lendacky@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
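
With SME active, page protections carry an encryption bit by default; the fix strips it when building the protection for video ranges. A kernel-style sketch (exactly where this lands in TTM's protection-building path is an assumption):

```c
#include <asm/pgtable.h>

/* Kernel-style sketch: when building the protection bits for a video
 * range, strip the memory-encryption mask so the device and the CPU
 * agree on the (unencrypted) contents. No-op when SME is off. */
static pgprot_t video_prot_sketch(pgprot_t prot)
{
	return pgprot_decrypted(prot);
}
```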

- 16 May 2017, 1 commit

By Masahiro Yamada:
For the C file, include <drm/*.h> instead of a path relative to include/drm. For headers in include/drm/ttm, simplify the <ttm/*.h> form to "*.h". This allows us to remove the -Iinclude/drm compiler flag from drivers/gpu/drm/ttm/Makefile (and from other drivers' Makefiles).

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1493009447-31524-3-git-send-email-yamada.masahiro@socionext.com
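
Concretely, the change has this shape (a representative TTM header picked for illustration; the exact lines touched may differ):

```c
/* Before: resolved only because the Makefile passed -Iinclude/drm. */
#include <ttm/ttm_bo_driver.h>

/* After: the standard form, relative to the top-level include/, so the
 * -Iinclude/drm flag can be dropped. */
#include <drm/ttm/ttm_bo_driver.h>
```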

- 05 April 2017, 1 commit

By Christian König:
This allows drivers to handle io_mem mappings on their own.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 25 February 2017, 1 commit

By Dave Jiang:
->fault(), ->page_mkwrite(), and ->pfn_mkwrite() calls do not need to take a vma and vmf parameter when the vma already resides in vmf. Remove the vma parameter to simplify things.

[arnd@arndb.de: fix ARM build]
Link: http://lkml.kernel.org/r/20170125223558.1451224-1-arnd@arndb.de
Link: http://lkml.kernel.org/r/148521301778.19116.10840599906674778980.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
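
For TTM's fault handler the mechanical effect is a signature change; a kernel-style sketch of before and after (handler body elided, return value a placeholder):

```c
#include <linux/mm.h>

/* Before: the VMA arrived as a separate argument.
 *   int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
 * After: the VMA is reached through the fault descriptor itself. */
static int ttm_bo_vm_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma; /* vma now lives in vmf */

	/* ... handle the fault using vma and vmf as before ... */
	return 0; /* placeholder */
}
```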

- 22 February 2017, 1 commit

By Nicolai Hähnle:
The vm fault handler relies on the fact that the VMA owns a reference to the BO. However, once mmap_sem is released, other tasks are free to destroy the VMA, which can lead to the BO being freed. Fix two code paths where that can happen, both related to vm fault retries.

Found via a lock debugging warning which flagged &bo->wu_mutex as locked while being destroyed.

Fixes: cbe12e74 ("drm/ttm: Allow vm fault retries")
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
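
The shape of the fix, as a kernel-style sketch: take a BO reference of our own before dropping mmap_sem, so the BO survives even if the VMA (and its reference) is destroyed while we wait. The helper names match the era's TTM API, but treat the exact sequence as an assumption:

```c
/* Kernel-style sketch of the retry path after the fix. */
ttm_bo_reference(bo);              /* our own reference: BO stays alive */
up_read(&vma->vm_mm->mmap_sem);    /* VMA may now be destroyed by others */
(void) ttm_bo_wait_unreserved(bo); /* safe: we still hold a reference */
ttm_bo_unref(&bo);                 /* drop our reference, then retry */
```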

- 15 December 2016, 1 commit

By Jan Kara:
Every single user of vmf->virtual_address typed that entry to unsigned long before doing anything with it, so the type of virtual_address does not really provide us any additional safety. Just use the masked vmf->address, which already has the appropriate type.

Link: http://lkml.kernel.org/r/1479460644-25076-3-git-send-email-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
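
In caller terms the conversion looks roughly like this standalone sketch: vmf->address holds the unaligned faulting address, so it is masked down to the page start, where the old void * field was cast by every user:

```c
#include <stdint.h>

#define SK_PAGE_MASK (~0xfffUL) /* 4 KiB pages, illustrative */

struct vmf_old { void *virtual_address; };
struct vmf_new { unsigned long address; };

/* Before: a void * that every user immediately cast back by hand. */
static unsigned long fault_addr_old(const struct vmf_old *vmf)
{
	return (unsigned long)vmf->virtual_address;
}

/* After: already an integer; mask down to the start of the page. */
static unsigned long fault_addr_new(const struct vmf_new *vmf)
{
	return vmf->address & SK_PAGE_MASK;
}
```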

- 25 October 2016, 1 commit

By Chris Wilson:
I plan to usurp the short name of struct fence for a core kernel struct, and so I need to rename the specialised fence/timeline for DMA operations to make room.

A consensus was reached in https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html that making clear this fence applies to DMA operations was a good thing. Since then the patch has grown a bit as usage increases, so hopefully it remains a good thing!

(v2...: rebase, rerun spatch)
v3: Compile on msm, spotted a manual fixup that I broke.
v4: Try again for msm, sorry Daniel

coccinelle script:

    @@ @@
    - struct fence
    + struct dma_fence
    @@ @@
    - struct fence_ops
    + struct dma_fence_ops
    @@ @@
    - struct fence_cb
    + struct dma_fence_cb
    @@ @@
    - struct fence_array
    + struct dma_fence_array
    @@ @@
    - enum fence_flag_bits
    + enum dma_fence_flag_bits
    @@ @@
    (
    - fence_init
    + dma_fence_init
    |
    - fence_release
    + dma_fence_release
    |
    - fence_free
    + dma_fence_free
    |
    - fence_get
    + dma_fence_get
    |
    - fence_get_rcu
    + dma_fence_get_rcu
    |
    - fence_put
    + dma_fence_put
    |
    - fence_signal
    + dma_fence_signal
    |
    - fence_signal_locked
    + dma_fence_signal_locked
    |
    - fence_default_wait
    + dma_fence_default_wait
    |
    - fence_add_callback
    + dma_fence_add_callback
    |
    - fence_remove_callback
    + dma_fence_remove_callback
    |
    - fence_enable_sw_signaling
    + dma_fence_enable_sw_signaling
    |
    - fence_is_signaled_locked
    + dma_fence_is_signaled_locked
    |
    - fence_is_signaled
    + dma_fence_is_signaled
    |
    - fence_is_later
    + dma_fence_is_later
    |
    - fence_later
    + dma_fence_later
    |
    - fence_wait_timeout
    + dma_fence_wait_timeout
    |
    - fence_wait_any_timeout
    + dma_fence_wait_any_timeout
    |
    - fence_wait
    + dma_fence_wait
    |
    - fence_context_alloc
    + dma_fence_context_alloc
    |
    - fence_array_create
    + dma_fence_array_create
    |
    - to_fence_array
    + to_dma_fence_array
    |
    - fence_is_array
    + dma_fence_is_array
    |
    - trace_fence_emit
    + trace_dma_fence_emit
    |
    - FENCE_TRACE
    + DMA_FENCE_TRACE
    |
    - FENCE_WARN
    + DMA_FENCE_WARN
    |
    - FENCE_ERR
    + DMA_FENCE_ERR
    )
    ( ... )

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161025120045.28839-1-chris@chris-wilson.co.uk

- 08 July 2016, 1 commit

By Christian König:
Instead of using the flag, just remember the fence of the last move operation. This avoids waiting for command submissions pipelined after the move, but before accessing the BO with the CPU again.

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 05 May 2016, 2 commits

By Christian König:
Not used any more.

Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

By Christian König:
Not used any more.

Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 16 January 2016, 1 commit

By Dan Williams:
Convert the raw unsigned long 'pfn' argument to pfn_t for the purpose of evaluating the PFN_MAP and PFN_DEV flags. When both are set it triggers _PAGE_DEVMAP to be set in the resulting pte. There are no functional changes to the gpu drivers as a result of this conversion.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: David Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
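
The idea behind pfn_t, in a standalone sketch (bit positions and names are illustrative, not the kernel's exact definitions): capability flags ride in the unused high bits of the value, so a pfn and the facts about it travel together:

```c
#include <stdint.h>

/* Illustrative flag bits in the unused high part of the value. */
#define SKETCH_PFN_DEV (1ULL << 62) /* backed by device memory */
#define SKETCH_PFN_MAP (1ULL << 63) /* has a struct-page-like mapping */

typedef struct { uint64_t val; } sketch_pfn_t;

static sketch_pfn_t pfn_to_sketch_pfn_t(unsigned long pfn, uint64_t flags)
{
	sketch_pfn_t p = { .val = (uint64_t)pfn | flags };
	return p;
}

/* DEV and MAP together is what would warrant a _PAGE_DEVMAP pte. */
static int sketch_pfn_t_devmap(sketch_pfn_t p)
{
	const uint64_t both = SKETCH_PFN_DEV | SKETCH_PFN_MAP;
	return (p.val & both) == both;
}
```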

- 23 September 2014, 1 commit

By Benjamin Herrenschmidt:
Today, most callers of ttm_io_prot() check TTM_PL_FLAG_CACHED before calling it, since on some archs it will unconditionally create non-cached mappings. But not all callers do, which is incorrect as far as I can tell. Instead, move that check inside ttm_io_prot() itself for all archs, and make powerpc use the same implementation as ia64 and arm.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
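
The pattern, as a standalone sketch with mock flag values (the real function returns an arch-specific pgprot_t): the cached-placement test now lives in the helper itself, so callers can no longer forget it:

```c
#include <stdint.h>

#define SKETCH_PL_FLAG_CACHED (1u << 16)  /* illustrative bit */
#define SKETCH_PROT_NONCACHED (1ul << 0)  /* illustrative bit */

/* Sketch of the post-commit shape: the helper decides whether the
 * placement is cached and only then adjusts the protection bits. */
static unsigned long io_prot_sketch(uint32_t caching_flags,
				    unsigned long prot)
{
	if (caching_flags & SKETCH_PL_FLAG_CACHED)
		return prot;                  /* cached: leave alone */
	return prot | SKETCH_PROT_NONCACHED;  /* else force non-cached */
}
```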

- 01 September 2014, 1 commit

By Maarten Lankhorst:
No users are left, kill it off! :D Conversion to the reservation api is next on the list; after that, the functionality can be restored with rcu.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>

- 12 March 2014, 1 commit

By Thomas Hellstrom:
A performance regression was introduced in TTM in Linux 3.13 when we started using VM_PFNMAP for shared mappings. In theory this should have been faster due to less page book-keeping, but it appears that VM_PFNMAP + x86 PAT + write-combine is a particularly cpu-hungry combination, as seen by largely increased cpu usage on r200 GL video playback. Until we've sorted out why, revert to always using VM_MIXEDMAP.

Reference: freedesktop.org bugzilla bug #75719
Reported-and-tested-by: <smoki00790@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: stable@vger.kernel.org

- 08 January 2014, 3 commits

By Thomas Hellstrom:
Needed for some vm operations; most notably unmap_mapping_range() with even_cows = 0.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
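
For context, unmap_mapping_range() is the core-VM helper that tears down user mappings of a file-offset range; a kernel-style sketch of the kind of call this enables (the offset and size variables shown are assumptions about the surrounding TTM code):

```c
/* Kernel-style sketch: zap every user mapping of this BO's range.
 * even_cows = 0 leaves private copy-on-write pages untouched. */
unmap_mapping_range(mapping,                    /* the BO's address_space */
		    bo_mmap_offset,             /* byte offset of the BO */
		    bo_num_pages << PAGE_SHIFT, /* length in bytes */
		    0 /* even_cows */);
```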

By Thomas Hellstrom:
This is illegal for at least two reasons:

1) While it may work on some platforms / iommus, obtaining page pointers from mapped sg-lists is illegal, since the DMA API allows page pointer information to be destroyed in the sg mapping process.

2) TTM has no way of determining the linear kernel map caching state of the underlying pages. PTEs with conflicting caching state pointing to the same pfn are not allowed.

TTM operations touching pages of imported sg-tables should be redirected through the proper dma-buf operations.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Daniel Vetter <daniel@ffwll.ch>

By Thomas Hellstrom:
VM_PFNMAP is faster than VM_MIXEDMAP due to reduced page administration, so use it for shared maps, where we don't have any copy-on-write pages. For private maps, we continue to use VM_MIXEDMAP.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
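
The selection itself is a one-line decision at mmap time; a standalone sketch (the bit values happen to match the usual kernel definitions, but treat them as illustrative):

```c
/* Illustrative vm_flags bits. */
#define SK_VM_SHARED   (1UL << 3)
#define SK_VM_PFNMAP   (1UL << 10)
#define SK_VM_MIXEDMAP (1UL << 28)

/* Shared mappings can never COW, so a pure PFN map is safe and skips
 * per-page bookkeeping; private mappings keep VM_MIXEDMAP. */
static unsigned long choose_map_type(unsigned long vm_flags)
{
	return vm_flags |
	       ((vm_flags & SK_VM_SHARED) ? SK_VM_PFNMAP : SK_VM_MIXEDMAP);
}
```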

- 17 December 2013, 1 commit

By Thomas Hellstrom:
VMAs covering a bo, but not starting at the same address-space offset as the bo they are mapping, were incorrectly generating SEGFAULT errors in the fault handler.

Reported-by: Joseph Dolinak <kanilo2@yahoo.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Cc: stable@vger.kernel.org
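
The corrected offset arithmetic presumably has this shape: the page index into the BO must account for the VMA's own starting offset (vm_pgoff) relative to where the BO lives in the address space. A standalone sketch:

```c
#define SK_PAGE_SHIFT 12 /* 4 KiB pages, illustrative */

/* Sketch: page index into the BO for a faulting address. vm_pgoff is
 * where the VMA starts within the offset space; bo_node_start is where
 * the BO starts. A VMA mapping the BO from its middle has
 * vm_pgoff > bo_node_start, which the broken code did not account for. */
static unsigned long bo_page_offset(unsigned long address,
				    unsigned long vm_start,
				    unsigned long vm_pgoff,
				    unsigned long bo_node_start)
{
	return ((address - vm_start) >> SK_PAGE_SHIFT)
	       + vm_pgoff - bo_node_start;
}
```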

- 20 November 2013, 1 commit

By Thomas Hellstrom:
Addresses "[BUG] completely bonkers use of set_need_resched + VM_FAULT_NOPAGE".

In the first occurrence it was used to try to be nice while releasing the mmap_sem and retrying the fault to work around a locking inversion. The second occurrence was never used.

There has been some discussion whether we should change the locking order to mmap_sem -> bo_reserve. This patch doesn't address that issue, and leaves that locking order undefined. The solution that we release the mmap_sem if tryreserve fails and wait for the buffer to become unreserved is something we want in any case, and follows how the core vm system waits for pages to become unlocked while releasing the mmap_sem.

The code also outlines what needs to be changed if we want to establish the locking order as mmap_sem -> bo::reserve.

One slight issue that remains with this code is that the fault handler might be prone to starvation if another thread continuously reserves the buffer. IMO that usage pattern is highly unlikely.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
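
The wait-outside-the-lock idea, reduced to a standalone sketch with callbacks standing in for the real locking primitives (all names here are illustrative, not TTM's):

```c
#include <stdbool.h>

enum fault_status { FAULT_HANDLED, FAULT_RETRY };

/* Stand-ins for the real primitives: a non-blocking reserve, dropping
 * the mmap lock, and a blocking wait for the buffer to be unreserved. */
struct fault_ops {
	bool (*try_reserve)(void *bo);
	void (*drop_mmap_lock)(void);
	void (*wait_unreserved)(void *bo);
};

/* Sketch of the pattern: never sleep on the buffer while holding the
 * mmap lock; instead drop it, wait, and let the core VM retry. */
static enum fault_status fault_sketch(const struct fault_ops *ops, void *bo)
{
	if (ops->try_reserve(bo))
		return FAULT_HANDLED; /* got it: handle the fault */

	ops->drop_mmap_lock();        /* avoid the locking inversion */
	ops->wait_unreserved(bo);     /* sleep until contention ends */
	return FAULT_RETRY;           /* the fault is simply re-run */
}
```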

- 13 November 2013, 1 commit

By Thomas Hellstrom:
Fix a long-standing TTM issue where we manipulated the vma page_prot bits while mmap_sem was taken in read mode only. We now make a local copy of the vma structure, which we pass when we set the ptes.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
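
In kernel-style sketch form (the helper call is illustrative), the fault handler now mutates a stack copy instead of the shared VMA:

```c
/* Kernel-style sketch: copy the VMA, adjust protections on the copy,
 * and use the copy for pte setup. The shared VMA is never written
 * while mmap_sem is only held for reading. */
struct vm_area_struct cvma = *vma;

cvma.vm_page_prot = ttm_io_prot(bo->mem.placement, cvma.vm_page_prot);
/* ... insert ptes using &cvma instead of vma ... */
```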

- 06 November 2013, 1 commit

By Thomas Hellstrom:
Make use of the FAULT_FLAG_ALLOW_RETRY flag to allow dropping the mmap_sem while waiting for bo idle. FAULT_FLAG_ALLOW_RETRY appears to be primarily designed for disk waits, but should work just as well for GPU waits.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>

- 19 August 2013, 1 commit

By Maarten Lankhorst:
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

- 25 July 2013, 1 commit

By David Herrmann:
Use the new vma-manager infrastructure. This doesn't change any implementation details, as the vma-offset-manager is nearly copied 1-to-1 from TTM.

The vm_lock is moved into the offset manager so we can drop it from TTM. During lookup, we use the vma locking helpers to take a reference to the found object. In all other scenarios, locking stays the same as before. We always guarantee that drm_vma_offset_remove() is called only during destruction. Hence, helpers like drm_vma_node_offset_addr() are always safe as long as the node has a valid offset.

This also drops the addr_space_offset member, as it is a copy of vm_start in vma_node objects. Use the accessor functions instead.

v4:
 - remove vm_lock
 - use drm_vma_offset_lock_lookup() to protect lookup (instead of vm_lock)

Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Martin Peres <martin.peres@labri.fr>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
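
A kernel-style sketch of the protected lookup this enables; drm_vma_offset_lock_lookup() is the helper the commit names, while the surrounding BO handling is an assumption about the TTM side:

```c
/* Kernel-style sketch: resolve a faulting page range to a BO under the
 * manager's lookup lock, taking a reference before the lock is dropped
 * so the object cannot vanish underneath us. */
struct drm_vma_offset_node *node;
struct ttm_buffer_object *bo = NULL;

drm_vma_offset_lock_lookup(&bdev->vma_manager);
node = drm_vma_offset_lookup_locked(&bdev->vma_manager,
				    page_start, num_pages);
if (node) {
	bo = container_of(node, struct ttm_buffer_object, vma_node);
	/* take a reference while still under the lookup lock */
}
drm_vma_offset_unlock_lookup(&bdev->vma_manager);
```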

- 16 April 2013, 1 commit

By Libin:
The (vma->vm_end - vma->vm_start) >> PAGE_SHIFT operation is implemented as an inline function, vma_pages(), in linux/mm.h, so use it.

Signed-off-by: Libin <huawei.libin@huawei.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
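
For reference, a runnable standalone sketch of exactly what vma_pages() computes:

```c
#include <stdio.h>

#define SK_PAGE_SHIFT 12 /* 4 KiB pages, as on most configs */

struct vma_sketch { unsigned long vm_start, vm_end; };

/* The same computation vma_pages() wraps in <linux/mm.h>. */
static unsigned long vma_pages_sketch(const struct vma_sketch *vma)
{
	return (vma->vm_end - vma->vm_start) >> SK_PAGE_SHIFT;
}

int main(void)
{
	struct vma_sketch vma = { 0x1000, 0x5000 };

	printf("%lu pages\n", vma_pages_sketch(&vma)); /* prints: 4 pages */
	return 0;
}
```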

- 28 November 2012, 1 commit

By Thomas Hellstrom:
Removes the need for a write lock each time we call ttm_bo_unref().

v2: Remove an unused variable.
v3: Really remove the unused variable.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

- 09 October 2012, 1 commit

By Konstantin Khlebnikov:
A long time ago, in v2.4, VM_RESERVED kept the swapout process off the VMA; it has since lost its original meaning but still has some effects:

     | effect                 | alternative flags
    -+------------------------+---------------------------------------------
    1| account as reserved_vm | VM_IO
    2| skip in core dump      | VM_IO, VM_DONTDUMP
    3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
    4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP

This patch removes the reserved_vm counter from mm_struct. Seemingly nobody cares about it; it is not exported into userspace directly and only reduces the total_vm shown in proc. Thus VM_RESERVED can be replaced with VM_IO or the pair VM_DONTEXPAND | VM_DONTDUMP.

remap_pfn_range() and io_remap_pfn_range() set VM_IO | VM_DONTEXPAND | VM_DONTDUMP. remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.

[akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Paris <eparis@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Venkatesh Pallipadi <venki@google.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
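
In driver terms the conversion is mechanical; a kernel-style fragment of the replacement for a no-dump, no-expand driver mapping like TTM's (the old line is shown in the comment for contrast):

```c
/* Kernel-style sketch of the mechanical replacement:
 *
 *   old: vma->vm_flags |= VM_RESERVED;
 *   new: */
vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
```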

- 20 March 2012, 1 commit

By Joe Perches:
Use the more current logging style. Add pr_fmt and remove the TTM_PFX uses. Coalesce formats and align arguments.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
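
A runnable user-space sketch of the pr_fmt mechanism being adopted (the "[TTM] " prefix is an assumption about what replaced TTM_PFX): define the prefix once and every pr_* call site picks it up at preprocessing time:

```c
#include <stdio.h>

/* Must be defined before the pr_* macros, as in kernel source files. */
#define pr_fmt(fmt) "[TTM] " fmt

/* User-space stand-in for the kernel's pr_err(). */
#define pr_err(fmt, ...) fprintf(stderr, pr_fmt(fmt), ##__VA_ARGS__)

int main(void)
{
	pr_err("out of kernel memory\n"); /* prints: [TTM] out of kernel memory */
	return 0;
}
```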

- 06 December 2011, 1 commit

By Jerome Glisse:
Move the page allocation and freeing to driver callbacks and provide TTM helper functions for those. The most intrusive change is the fact that we now only fully populate an object; this simplifies some of the code designed around the page fault design.

V2: Rebase on top of memory accounting overhaul
V3: New rebase on top of more memory accounting changes
V4: Rebase on top of no memory account changes (where/when is my delorean when I need it?)

Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>

- 28 October 2011, 1 commit

By Dave Airlie:
This reverts commit dfadbbdb. Further upstream discussion between Marek and Thomas concluded that this wasn't fully baked and needed further work, so revert it before it hits mainline.

Signed-off-by: Dave Airlie <airlied@redhat.com>

- 01 September 2011, 1 commit

By Marek Olšák:
Sometimes we want to know whether a buffer is busy and wait for it (bo_wait). However, sometimes it would be more useful to be able to query whether a buffer is busy being either read or written, and wait until it has stopped being either read or written. The point of this is to avoid unnecessary waiting: e.g. if a GPU has written something to a buffer and is now reading that buffer, and a CPU wants to map that buffer for read, it needs to wait only for the last write. If there were no write, no waiting would be needed at all.

This, of course, requires user-space drivers to send read/write flags with each relocation (like we have read/write domains in radeon, so we can actually use those for something useful now).

Now how this patch works: the read/write flags are passed to ttm_validate_buffer. TTM maintains separate sync objects for the last read and the last write of each buffer, in addition to the sync object for the last use of a buffer. ttm_bo_wait then operates on one of the sync objects.

Signed-off-by: Marek Olšák <maraeo@gmail.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
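
The bookkeeping described above, as a standalone sketch (field names are illustrative, not the actual TTM members):

```c
/* Sketch: three sync objects per buffer, so a CPU read only has to
 * wait on the last write, not on subsequent GPU reads. */
struct bo_sync_state_sketch {
	void *sync_obj;       /* last use (read or write) */
	void *sync_obj_read;  /* last read */
	void *sync_obj_write; /* last write */
};
```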

- 22 November 2010, 2 commits

By Thomas Hellstrom:
This patch attempts to fix up shortcomings with the current calling sequences:

1) There's a fastpath where no locking occurs and only io_mem_reserve is called to obtain the needed info for mapping. The fastpath is set per memory type manager.

2) If the fastpath is disabled, io_mem_reserve and io_mem_free will be exactly balanced and not called recursively for the same struct ttm_mem_reg.

3) Optionally, the driver can choose to enable a per memory type manager LRU eviction mechanism that, when io_mem_reserve returns -EAGAIN, will attempt to kill user-space mappings of memory in that manager to free up the needed resources.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

By Thomas Hellstrom:
The bo lock was used only to protect the bo sync object members, and since it is a per-bo lock, fencing a buffer list will see a lot of locks and unlocks. Replace it with a per-device lock that protects the sync object members on *all* bos. Reading and setting these members will always be very quick, so the risk of heavy lock contention is microscopic.

Note that waiting for sync objects will always take place outside of this lock. The bo device fence lock will eventually be replaced with a seqlock / rcu mechanism so we can determine that a bo is idle under a rcu / read seqlock. However, this change will allow us to batch fencing and unreserving of buffers with a minimal amount of locking.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

- 20 April 2010, 1 commit

By Jerome Glisse:
On fault, the driver is given the opportunity to perform any operation it sees fit in order to place the buffer into a CPU-visible area of memory. This patch doesn't break TTM users; nouveau, vmwgfx and radeon should keep working properly. A future patch will take advantage of this infrastructure and remove the old path from TTM once drivers are converted.

V2: return VM_FAULT_NOPAGE if the callback returns -EBUSY or -ERESTARTSYS
V3: balance io_mem_reserve and io_mem_free calls; fault_reserve_notify is responsible for performing any task necessary for the mapping to succeed
V4: minor cleanup; atomic_t -> bool, as the member is protected by the reserve mechanism from concurrent access
V5: the callback is now responsible for iomapping the bo and providing a virtual address; this simplifies TTM and will allow us to get rid of TTM_MEMTYPE_FLAG_NEEDS_IOREMAP
V6: use the bus addr data to decide whether to ioremap; we don't necessarily need to ioremap in the callback, but still allow the driver to use a static mapping

Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

- 16 December 2009, 1 commit

By Dave Airlie:
This path isn't used by radeon yet, but future drivers will want it, so fix it right.

Reported-by: Luca Barbieri <luca@luca-barbieri.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>