- 30 Jul 2020, 1 commit

Committed by Felix Kuehling

VMAs with a pg_offs that's offset from the start of the vma_node need to adjust the offset within the BO accordingly. This matches the offset calculation in ttm_bo_vm_fault_reserved.

Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: Laurent Morichetti <laurent.morichetti@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/381169/
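
A minimal sketch of the offset calculation this refers to, assuming the TTM field names of that era (bo->base.vma_node); illustrative only, not the patch itself:

```c
#include <linux/mm.h>
#include <drm/drm_vma_manager.h>
#include <drm/ttm/ttm_bo_api.h>

/*
 * Sketch: page offset of a fault address within the BO when the VMA's
 * vm_pgoff is itself offset from the start of the vma_node.
 */
static unsigned long bo_page_offset(struct vm_area_struct *vma,
				    struct ttm_buffer_object *bo,
				    unsigned long address)
{
	return ((address - vma->vm_start) >> PAGE_SHIFT) +
	       vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node);
}
```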

- 15 Jun 2020, 1 commit

Committed by Xiyu Yang

ttm_bo_vm_fault_reserved() invokes dma_fence_get(), which returns a reference to the specified dma_fence object in "moving" with an increased refcount. When ttm_bo_vm_fault_reserved() returns, the local variable "moving" becomes invalid, so the refcount should be decreased to keep the refcount balanced. The reference counting issue happens in several exception handling paths of ttm_bo_vm_fault_reserved(). When those error scenarios occur, such as "err" being -EBUSY, the function forgets to decrease the refcount increased by dma_fence_get(), causing a refcount leak. Fix this issue by calling dma_fence_put() when the no_wait_gpu flag is true.

Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/370219/
Signed-off-by: Christian König <christian.koenig@amd.com>
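
A minimal sketch of the get/put balance the fix restores; wait_for_moving_fence() is a hypothetical helper, not the actual TTM code:

```c
#include <linux/dma-fence.h>
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_api.h>

/* Hypothetical helper illustrating the rule: every dma_fence_get() must be
 * matched by a dma_fence_put() on every exit path, including the
 * no_wait_gpu early return that previously leaked the reference. */
static vm_fault_t wait_for_moving_fence(struct ttm_buffer_object *bo,
					bool no_wait_gpu)
{
	struct dma_fence *moving = dma_fence_get(bo->moving);

	if (!moving)
		return 0;

	if (no_wait_gpu) {
		dma_fence_put(moving);	/* drop the reference before bailing out */
		return VM_FAULT_NOPAGE;
	}

	dma_fence_wait(moving, true);
	dma_fence_put(moving);
	return 0;
}
```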

- 10 Jun 2020, 2 commits

Committed by Michel Lespinasse

Convert comments that reference mmap_sem to reference mmap_lock instead.

[akpm@linux-foundation.org: fix up linux-next leftovers]
[akpm@linux-foundation.org: s/lockaphore/lock/, per Vlastimil]
[akpm@linux-foundation.org: more linux-next fixups, per Michel]

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-13-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Committed by Michel Lespinasse

This change converts the existing mmap_sem rwsem calls to use the new mmap locking API instead. The change is generated using coccinelle with the following rule:

    // spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .
    @@
    expression mm;
    @@
    (
    -init_rwsem
    +mmap_init_lock
    |
    -down_write
    +mmap_write_lock
    |
    -down_write_killable
    +mmap_write_lock_killable
    |
    -down_write_trylock
    +mmap_write_trylock
    |
    -up_write
    +mmap_write_unlock
    |
    -downgrade_write
    +mmap_write_downgrade
    |
    -down_read
    +mmap_read_lock
    |
    -down_read_killable
    +mmap_read_lock_killable
    |
    -down_read_trylock
    +mmap_read_trylock
    |
    -up_read
    +mmap_read_unlock
    )
    -(&mm->mmap_sem)
    +(mm)

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
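
A before/after sketch of what the rule does to a typical call site; walk_user_vmas() is a hypothetical function used only for illustration:

```c
#include <linux/mm_types.h>
#include <linux/mmap_lock.h>

/* Hypothetical call site showing the conversion performed by the rule. */
static void walk_user_vmas(struct mm_struct *mm)
{
	/* before: down_read(&mm->mmap_sem); */
	mmap_read_lock(mm);

	/* ... walk the VMAs under the lock ... */

	/* before: up_read(&mm->mmap_sem); */
	mmap_read_unlock(mm);
}
```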

- 10 Apr 2020, 1 commit

With amdgpu and CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y, there are errors like:

    BUG: non-zero pgtables_bytes on freeing mm

and:

    BUG: Bad rss-counter state

with TTM transparent huge-pages. Until we've figured out what other TTM drivers do differently compared to vmwgfx, disable the huge_fault() callback, eliminating transhuge page-table entries.

Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
Reported-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
Tested-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200409164925.11912-1-thomas_os@shipmail.org

- 03 Apr 2020, 1 commit

Committed by Peter Xu

The idea comes from a discussion between Linus and Andrea [1].

Before this patch we only allow a page fault to retry once. We achieved this by clearing the FAULT_FLAG_ALLOW_RETRY flag when doing handle_mm_fault() the second time. This was mainly used to avoid unexpected starvation of the system by looping over forever to handle the page fault on a single page. However that should hardly happen, and after all, for each code path to return a VM_FAULT_RETRY, we'll first wait for a condition (during which time we should possibly yield the cpu) to happen before VM_FAULT_RETRY is really returned.

This patch removes the restriction by keeping the FAULT_FLAG_ALLOW_RETRY flag when we receive VM_FAULT_RETRY. It means that the page fault handler can now retry the page fault multiple times if necessary without the need to generate another page fault event. Meanwhile we still keep the FAULT_FLAG_TRIED flag so the page fault handler can still identify whether a page fault is the first attempt or not. Then we'll have these combinations of fault flags (only considering the ALLOW_RETRY flag and the TRIED flag):

- ALLOW_RETRY and !TRIED: the page fault allows retry, and this is the first try
- ALLOW_RETRY and TRIED: the page fault allows retry, and this is not the first try
- !ALLOW_RETRY and !TRIED: the page fault does not allow retry at all
- !ALLOW_RETRY and TRIED: this is forbidden and should never be used

In existing code we have multiple places that have taken special care of the first condition above by checking against (fault_flags & FAULT_FLAG_ALLOW_RETRY). This patch introduces a simple helper to detect the first retry of a page fault by checking against both (fault_flags & FAULT_FLAG_ALLOW_RETRY) and !(fault_flag & FAULT_FLAG_TRIED), because now even the 2nd try will have ALLOW_RETRY set, and then uses that helper in all existing special paths. One example is in __lock_page_or_retry(): now we'll drop the mmap_sem only in the first attempt of the page fault and keep it in follow-up retries, so the old locking behavior is retained.

This will be a nice enhancement for current code [2] and at the same time supporting material for the future userfaultfd-writeprotect work, since in that work there will always be an explicit userfault writeprotect retry for protected pages, and if that cannot resolve the page fault (e.g., when userfaultfd-writeprotect is used in conjunction with swapped pages) then we'll possibly need a 3rd retry of the page fault. It might also benefit other potential users who will have a similar requirement like userfault write-protection.

GUP code is not touched yet and will be covered in a follow-up patch. Please read the thread below for more information.

[1] https://lore.kernel.org/lkml/20171102193644.GB22686@redhat.com/
[2] https://lore.kernel.org/lkml/20181230154648.GB9832@redhat.com/

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Brian Geffon <bgeffon@google.com>
Cc: Bobby Powers <bobbypowers@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Denis Plotnikov <dplotnikov@virtuozzo.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Martin Cracauer <cracauer@cons.org>
Cc: Marty McFadden <mcfadden8@llnl.gov>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Maya Gokhale <gokhale2@llnl.gov>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Link: http://lkml.kernel.org/r/20200220160246.9790-1-peterx@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
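
The helper described above checks both flags; it looks essentially like this (sketch based on the description; the flag-argument type differs between kernel versions):

```c
#include <linux/mm.h>

/* Sketch of the helper the patch introduces (named
 * fault_flag_allow_retry_first() upstream): a fault is the *first* attempt
 * iff retry is allowed and the TRIED flag is not yet set. */
static inline bool fault_is_first_attempt(unsigned int flags)
{
	return (flags & FAULT_FLAG_ALLOW_RETRY) && !(flags & FAULT_FLAG_TRIED);
}
```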

- 25 Mar 2020, 1 commit

Support huge (PMD-size and PUD-size) page-table entries by providing a huge_fault() callback. We still support private mappings and write-notify by splitting the huge page-table entries on write-access. Note that for huge page-faults to occur, either the kernel needs to be compiled with trans-huge-pages always enabled, or the kernel needs to be compiled with trans-huge-pages enabled using madvise, and the user-space app needs to call madvise() to enable trans-huge pages on a per-mapping basis. Furthermore, huge page-faults will not succeed unless buffer objects and user-space addresses are aligned on huge page size boundaries.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
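
A rough sketch of what wiring a huge_fault() handler into a driver's vm_ops looks like; my_bo_huge_fault and my_bo_vm_ops are hypothetical names, and the fallback-only body just marks where PMD/PUD insertion would go:

```c
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_api.h>

/* Hypothetical handler: insert a huge entry when the BO offset and user VA
 * are huge-page aligned, otherwise fall back to ordinary PTE faults. */
static vm_fault_t my_bo_huge_fault(struct vm_fault *vmf,
				   enum page_entry_size pe_size)
{
	switch (pe_size) {
	case PE_SIZE_PMD:
	case PE_SIZE_PUD:
		/* alignment checks and huge-entry insertion would go here */
		return VM_FAULT_FALLBACK;
	default:
		return VM_FAULT_FALLBACK;
	}
}

static const struct vm_operations_struct my_bo_vm_ops = {
	.fault      = ttm_bo_vm_fault,
	.huge_fault = my_bo_huge_fault,
	.open       = ttm_bo_vm_open,
	.close      = ttm_bo_vm_close,
	.access     = ttm_bo_vm_access,
};
```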

- 16 Jan 2020, 1 commit

Committed by Thomas Hellstrom

TTM graphics buffer objects may, transparently to user-space, move between IO and system memory. When that happens, all PTEs pointing to the old location are zapped before the move and then faulted in again if needed. When that happens, the page protection caching mode and encryption bits may change and be different from those of struct vm_area_struct::vm_page_prot.

We were using an ugly hack to set the page protection correctly. Fix that and instead export and use vmf_insert_mixed_prot() or use vmf_insert_pfn_prot(). Also get the default page protection from struct vm_area_struct::vm_page_prot rather than using vm_get_page_prot(). This way we catch modifications done by the vm system for drivers that want write-notification.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
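
A minimal sketch of the kind of insertion this enables: the protection passed in can differ from vma->vm_page_prot (insert_io_pfn is a hypothetical wrapper, not the TTM code):

```c
#include <linux/mm.h>

/* Hypothetical wrapper: insert a PFN with caching/encryption bits derived
 * from the BO's current placement rather than from vma->vm_page_prot alone. */
static vm_fault_t insert_io_pfn(struct vm_fault *vmf, unsigned long pfn,
				pgprot_t prot)
{
	/* prot typically starts from vmf->vma->vm_page_prot (preserving
	 * write-notify bits) and is then adjusted, e.g. to write-combined. */
	return vmf_insert_pfn_prot(vmf->vma, vmf->address, pfn, prot);
}
```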

- 06 Dec 2019, 1 commit

Committed by Gerd Hoffmann

The fake offset is going to stay, so change the calling convention for drm_gem_object_funcs.mmap to include the fake offset. Update all users accordingly.

Note that this reverts 83b8a6f2 ("drm/gem: Fix mmap fake offset handling for drm_gem_object_funcs.mmap") and on top then adds the fake offset to drm_gem_prime_mmap to make sure all paths leading to obj->funcs->mmap are consistent.

v3: move the fake-offset tweak in drm_gem_prime_mmap() so we have this code only once in the function (Rob Herring).

Fixes: 83b8a6f2 ("drm/gem: Fix mmap fake offset handling for drm_gem_object_funcs.mmap")
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Rob Herring <robh@kernel.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191127092523.5620-2-kraxel@redhat.com
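
A hedged sketch of the convention: before calling obj->funcs->mmap, the dma-buf/prime path now folds the fake offset into vm_pgoff (prime_mmap_with_fake_offset is an illustrative name):

```c
#include <drm/drm_gem.h>
#include <drm/drm_vma_manager.h>

/* Illustrative helper: make the vma look like it came through the normal
 * GEM mmap path by adding the object's fake offset before dispatching. */
static int prime_mmap_with_fake_offset(struct drm_gem_object *obj,
				       struct vm_area_struct *vma)
{
	vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);
	return obj->funcs->mmap(obj, vma);
}
```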

- 08 Nov 2019, 1 commit

Committed by Christian König

That is needed by at least a cleanup in radeon.

v2: also export ttm_bo_vm_access

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/339353/

- 06 Nov 2019, 3 commits

Committed by Thomas Hellstrom

The default TTM fault handler may not be completely sufficient (vmwgfx needs to do some bookkeeping, control the write protection and also needs to restrict the number of prefaults). Also make it possible to replicate ttm_bo_vm_reserve() functionality for, for example, mkwrite handlers. So turn the TTM vm code into helpers: ttm_bo_vm_fault_reserved(), ttm_bo_vm_open(), ttm_bo_vm_close() and ttm_bo_vm_reserve(). Also provide a default TTM fault handler for other drivers to use.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
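
A sketch of how a driver fault handler can be built from these helpers, modelled on the default handler described above; the exact signatures (in particular the prefault-count argument and TTM_BO_VM_NUM_PREFAULT) are assumptions from the kernels of this era:

```c
#include <linux/mm.h>
#include <linux/dma-resv.h>
#include <drm/ttm/ttm_bo_api.h>

/* Hypothetical driver fault handler using the exported TTM helpers:
 * reserve the BO, do driver-specific work, then let TTM populate PTEs. */
static vm_fault_t my_driver_vm_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct ttm_buffer_object *bo = vma->vm_private_data;
	vm_fault_t ret;

	ret = ttm_bo_vm_reserve(bo, vmf);
	if (ret)
		return ret;

	/* driver-specific bookkeeping could go here */

	ret = ttm_bo_vm_fault_reserved(vmf, vma->vm_page_prot,
				       TTM_BO_VM_NUM_PREFAULT);
	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
		return ret;	/* helper already dropped the reservation */

	dma_resv_unlock(bo->base.resv);
	return ret;
}
```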

Committed by Thomas Hellstrom

The explicit typecasts are meaningless, so remove them.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>

Committed by Daniel Vetter

With nouveau fixed, all ttm-using drivers have the correct nesting of mmap_sem vs dma_resv, and we can just lock the buffer. Assuming I didn't screw up anything with my audit of course.

v2:
- Don't forget wu_mutex (Christian König)
- Keep the mmap_sem-less wait optimization (Thomas)
- Use _lock_interruptible to be good citizens (Thomas)

v3: Rebase over fault handler helperification.

Reviewed-by: Christian König <christian.koenig@amd.com> (v2)
Reviewed-by: Thomas Hellström <thellstrom@vmware.com> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: "VMware Graphics" <linux-graphics-maintainer@vmware.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191104173801.2972-3-daniel.vetter@ffwll.ch

- 04 Nov 2019, 2 commits

Committed by Thomas Hellstrom

The default TTM fault handler may not be completely sufficient (vmwgfx needs to do some bookkeeping, control the write protection and also needs to restrict the number of prefaults). Also make it possible to replicate ttm_bo_vm_reserve() functionality for, for example, mkwrite handlers. So turn the TTM vm code into helpers: ttm_bo_vm_fault_reserved(), ttm_bo_vm_open(), ttm_bo_vm_close() and ttm_bo_vm_reserve(). Also provide a default TTM fault handler for other drivers to use.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/332900/?series=67217&rev=1
Signed-off-by: Christian König <christian.koenig@amd.com>

Committed by Thomas Hellstrom

The explicit typecasts are meaningless, so remove them.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/332899/
Signed-off-by: Christian König <christian.koenig@amd.com>

- 30 Oct 2019, 1 commit

Committed by Rob Herring

Commit c40069cb ("drm: add mmap() to drm_gem_object_funcs") introduced a GEM object mmap() hook which is expected to subtract the fake offset from vm_pgoff. However, for mmap() on dmabufs, there is no fake offset. To fix this, let's always call the mmap() object callback with an offset of 0, and leave it up to drm_gem_mmap_obj() to remove the fake offset. TTM still needs the fake offset, so we have to add it back until that's fixed.

Fixes: c40069cb ("drm: add mmap() to drm_gem_object_funcs")
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024191859.31700-1-robh@kernel.org

- 25 Oct 2019, 1 commit

Committed by Christian König

As the name says, global memory and BO accounting is global. So it doesn't make too much sense to have pointers to the global structures all around the code.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Thomas Hellström <thellstrom@vmware.com>
Link: https://patchwork.freedesktop.org/patch/332879/

- 17 Oct 2019, 2 commits

Committed by Gerd Hoffmann

Rename ttm_fbdev_mmap to ttm_bo_mmap_obj. Move the vm_pgoff sanity check to amdgpu_bo_fbdev_mmap (the only ttm_fbdev_mmap user in tree). The ttm_bo_mmap_obj function can now be used to map any buffer object. This allows implementing &drm_gem_object_funcs.mmap in the gem ttm helpers.

v3: patch added to series

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191016115203.20095-8-kraxel@redhat.com

Committed by Gerd Hoffmann

Factor out ttm vma setup to a new function. Reduces code duplication a bit.

v2: don't change vm_flags (moved to separate patch).
v4: make ttm_bo_mmap_vma_setup static.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191016115203.20095-7-kraxel@redhat.com

- 14 Oct 2019, 1 commit

Committed by Thomas Hellstrom

Commit 4daa4fba ("gpu: drm: ttm: Adding new return type vm_fault_t") broke TTM prefaulting. Since vmf_insert_mixed() typically always returns VM_FAULT_NOPAGE, prefaulting stops after the second PTE.

Restore (almost) the original behaviour. Unfortunately, with the new vm_fault_t return type we can no longer determine whether a prefaulting PTE insertion hit an already populated PTE and terminate the insertion loop. Instead we continue with the pre-determined number of prefaults.

Fixes: 4daa4fba ("gpu: drm: ttm: Adding new return type vm_fault_t")
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/330387/
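
A simplified sketch of the restored prefault-loop shape; prefault_range and its error handling are illustrative, not the TTM code:

```c
#include <linux/mm.h>

/* Illustrative loop: always attempt the pre-determined number of prefaults;
 * only a failure on the originally faulting page (i == 0) is fatal. */
static vm_fault_t prefault_range(struct vm_area_struct *vma,
				 unsigned long address,
				 unsigned long first_pfn,
				 unsigned int num_prefault)
{
	unsigned int i;

	for (i = 0; i < num_prefault; ++i) {
		vm_fault_t ret = vmf_insert_pfn(vma, address, first_pfn + i);

		if (unlikely(ret & VM_FAULT_ERROR)) {
			if (i == 0)
				return ret;	/* the real fault failed */
			break;			/* stop prefaulting early */
		}
		address += PAGE_SIZE;
		if (address >= vma->vm_end)
			break;
	}
	return VM_FAULT_NOPAGE;
}
```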

- 11 Sep 2019, 1 commit

Committed by Gerd Hoffmann

Rename the embedded struct vma_offset_manager; the new name is _vma_manager. ttm_bo_device.vma_manager is changed to a pointer. The ttm_bo_device_init() function gets an additional vma_manager argument, which allows initializing ttm with a different vma manager. When passing NULL the embedded _vma_manager is used. All callers are updated to pass NULL, so the behavior doesn't change.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20190905070509.22407-2-kraxel@redhat.com

- 13 Aug 2019, 1 commit

Committed by Christian König

Be more consistent with the naming of the other DMA-buf objects.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/323401/

- 06 Aug 2019, 2 commits

Committed by Gerd Hoffmann

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20190805140119.7337-11-kraxel@redhat.com

Committed by Gerd Hoffmann

Drop vma_node from ttm_buffer_object, use the gem struct (base.vma_node) instead.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20190805140119.7337-9-kraxel@redhat.com

- 16 Jul 2019, 1 commit

Committed by Dave Airlie (revert of a merge from git://people.freedesktop.org/~thomash/linux)

This reverts commit 031e610a, reversing changes made to 52d2d44e. The mm changes in there were premature and not fully acked or reviewed by core mm folks; I dropped the ball by merging them via this tree, so let's take them all back out.

Signed-off-by: Dave Airlie <airlied@redhat.com>

- 18 Jun 2019, 2 commits

Committed by Thomas Hellstrom

With the vmwgfx dirty tracking, the default TTM fault handler is not completely sufficient (vmwgfx needs to modify the vma->vm_flags member, and also needs to restrict the number of prefaults). We also want to replicate the new ttm_bo_vm_reserve() functionality. So start turning the TTM vm code into helpers: ttm_bo_vm_fault_reserved() and ttm_bo_vm_reserve(), and provide a default TTM fault handler for other drivers to use.

Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: "Christian König" <christian.koenig@amd.com> #v1

Committed by Thomas Hellstrom

Add a pointer to the struct vm_operations_struct in the bo_device, and assign that pointer to the default value currently used. The driver can then optionally modify that pointer, and the new value can be used for each new vma created.

Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>

- 20 Mar 2019, 2 commits

Committed by Thomas Zimmermann

GEM defines DRM_FILE_PAGE_OFFSET_{START,SIZE} constants for the mmap-able range of addresses. TTM can use them as well.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

Committed by Thomas Zimmermann

A BO's address has to be at least the minimum offset. Sharing this test in ttm_bo_mmap() removes code from drivers. A full buffer-address validation is still done within drm_vma_offset_lookup_locked().

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 26 Jan 2019, 1 commit

Committed by Christian König

Move the BO on the LRU only when it is actually moved by a DMA operation.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-And-Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 28 Sep 2018, 1 commit

Committed by Thomas Hellstrom

Export ttm_bo_get_unless_zero() to be used when looking up buffer objects that are removed from the lookup structure in the destructor.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
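
The helper is essentially a kref_get_unless_zero() wrapper; a sketch of the idea (field names per the TTM of that era, presented as an approximation):

```c
#include <linux/kref.h>
#include <drm/ttm/ttm_bo_api.h>

/* Take a reference only if the BO's refcount has not already hit zero,
 * i.e. only if the object is not already on its way to destruction. */
static struct ttm_buffer_object *
bo_get_unless_zero(struct ttm_buffer_object *bo)
{
	return kref_get_unless_zero(&bo->kref) ? bo : NULL;
}
```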

- 11 Jul 2018, 2 commits

Committed by Thomas Zimmermann

A call to ttm_bo_unref() clears the supplied pointer to NULL, while ttm_bo_put() does not. None of the converted call sites requires the pointer to become NULL, so the respective assignments have been left out of the patch.

Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

Committed by Thomas Zimmermann

Signed-off-by: Thomas Zimmermann <contact@tzimmermann.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 20 Jun 2018, 1 commit

Committed by Souptick Joarder

Use the new return type vm_fault_t for the fault handler. For now, this just documents that the function returns a VM_FAULT value rather than an errno. Once all instances are converted, vm_fault_t will become a distinct type.

Ref-> commit 1c8f4220 ("mm: change return type to vm_fault_t")

Previously vm_insert_{mixed,pfn} returned an errno which the driver mapped into a VM_FAULT_* type. The new functions vmf_insert_{mixed,pfn} replace this inefficiency by returning a VM_FAULT_* type directly.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 16 May 2018, 1 commit

Committed by Dirk Hohndel

This is dual licensed under GPL-2.0 or MIT.

Signed-off-by: Dirk Hohndel (VMware) <dirk@hohndel.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 27 Feb 2018, 2 commits

Committed by Roger He

Set TTM_OPT_FLAG_FORCE_ALLOC when we are servicing the page fault routine. In ttm_mem_global_reserve(), always allow the GTT page reservation when in the page fault routine, because the page fault path has already grabbed system memory and allowing this exception is harmless. Otherwise it would trigger the OOM killer. Will be used later.

v2: set the FORCE_ALLOC always
v3: minor refine

Signed-off-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
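
A small sketch of how a fault path can build its ttm_operation_ctx with this flag; the initializer is illustrative, with field names as in the TTM of that era:

```c
#include <drm/ttm/ttm_bo_api.h>

/* Illustrative context used from the fault path: memory accounting is told
 * it must not fail the allocation, since the faulting process already holds
 * the system memory backing this access. */
static void init_fault_ctx(struct ttm_operation_ctx *ctx)
{
	*ctx = (struct ttm_operation_ctx) {
		.interruptible = false,
		.no_wait_gpu = false,
		.flags = TTM_OPT_FLAG_FORCE_ALLOC,
	};
}
```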

Committed by Christian König

To aid debugging, set the page mapping during allocation instead of during VM faults.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 20 Feb 2018, 2 commits

Committed by Christian König

Stop calling the driver callback directly.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

Committed by Tom St Denis

The dual ret/retval scheme was more complex than it needed to be. Now we drop the retval variable and assign the appropriate VM codes to ret instead.

Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

- 30 Jan 2018, 1 commit

Committed by Tom St Denis

The buf pointer was not being incremented inside the loop, meaning the same block of data would be read or written repeatedly.

(v2) Change the 'buf' pointer to uint8_t* type

Cc: stable@vger.kernel.org
Fixes: 09ac4fcb ("drm/ttm: Implement vm_operations_struct.access v2")
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
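
A generic sketch of the bug class being fixed: when copying in chunks, the caller's buffer pointer must advance with each chunk (illustrative code, not the TTM access handler itself):

```c
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/types.h>

/* Illustrative chunked copy: without the "buf += bytes" step, every chunk
 * would land at the start of the caller's buffer. */
static void copy_in_chunks(uint8_t *buf, const uint8_t *src, size_t len)
{
	while (len) {
		size_t bytes = min_t(size_t, len, PAGE_SIZE);

		memcpy(buf, src, bytes);
		buf += bytes;	/* the increment that was missing */
		src += bytes;
		len -= bytes;
	}
}
```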