- 29 September 2020, 1 commit

Authored by Thomas Zimmermann

This patch updates dma_buf_vmap() and dma-buf's vmap callback to use struct dma_buf_map.

The interfaces used to return a buffer address. This address now gets stored in an instance of the structure that is given as an additional argument. The functions return an errno code on errors. Users of the functions are updated accordingly. This is only an interface change. It is currently expected that dma-buf memory can be accessed with system memory load/store operations.

v3: * update fastrpc driver (kernel test robot)
v2: * always clear map parameter in dma_buf_vmap() (Daniel)
    * include dma-buf-heaps and i915 selftests (kernel test robot)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Tomasz Figa <tfiga@chromium.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20200925115601.23955-3-tzimmermann@suse.de
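A minimal sketch of the new calling convention, assuming the matching dma_buf_vunmap() conversion from the same series; function and variable names here are illustrative, not from the patch:

```c
#include <linux/dma-buf.h>
#include <linux/dma-buf-map.h>

/* Map a dma-buf into kernel address space: the address now arrives
 * through @map, and the return value is an errno code.
 */
static int example_access(struct dma_buf *buf)
{
	struct dma_buf_map map;
	int err;

	err = dma_buf_vmap(buf, &map);
	if (err)
		return err;

	if (!map.is_iomem)
		*(u8 *)map.vaddr = 0;	/* system-memory load/store access */

	dma_buf_vunmap(buf, &map);
	return 0;
}
```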
-
- 25 September 2020, 1 commit

Authored by Thomas Zimmermann

GEM object functions deprecate several similar callback interfaces in struct drm_driver. This patch replaces the per-driver callbacks with per-instance callbacks in tegra.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200923102159.24084-16-tzimmermann@suse.de
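The general shape of such a conversion looks roughly like this (a sketch with hypothetical handler names; the real table in the tegra driver wires up tegra's own helpers):

```c
#include <drm/drm_gem.h>

static void example_free(struct drm_gem_object *obj);
static struct dma_buf *example_export(struct drm_gem_object *obj, int flags);
static void *example_vmap(struct drm_gem_object *obj);
static void example_vunmap(struct drm_gem_object *obj, void *vaddr);

/* Per-instance callback table replacing the struct drm_driver hooks. */
static const struct drm_gem_object_funcs example_gem_object_funcs = {
	.free = example_free,
	.export = example_export,
	.vmap = example_vmap,
	.vunmap = example_vunmap,
};

/* Assigned once at object creation; the DRM core then consults
 * obj->funcs instead of the deprecated driver-wide callbacks.
 */
static void example_gem_init(struct drm_gem_object *obj)
{
	obj->funcs = &example_gem_object_funcs;
}
```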
-
- 09 September 2020, 1 commit

Authored by Gerd Hoffmann

Add a drm_device argument to drm_prime_pages_to_sg(), so we can call dma_max_mapping_size() to figure out the segment size limit and call into __sg_alloc_table_from_pages() with the correct limit.

This fixes virtio-gpu with SEV. Possibly it'll fix other bugs too, given that drm seems to totally ignore segment size limits so far ...

v2: place max_segment in drm driver not gem object.
v3: move max_segment next to the other gem fields.
v4: just use dma_max_mapping_size().

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20200907112425.15610-2-kraxel@redhat.com
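A sketch of the resulting call shape (the signature shown is the post-change helper; the caller name is illustrative):

```c
#include <drm/drm_prime.h>

/* The helper now takes the drm_device so it can bound SG segment sizes
 * by dma_max_mapping_size(dev->dev) internally.
 */
static struct sg_table *example_get_sg_table(struct drm_gem_object *obj,
					     struct page **pages,
					     unsigned int num_pages)
{
	return drm_prime_pages_to_sg(obj->dev, pages, num_pages);
}
```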
-
- 20 May 2020, 1 commit

Authored by Emil Velikov

Spelling out _unlocked for each and every driver is annoying, especially if we consider how many drivers do not know (or need to know) about the horror stories involving struct_mutex. Just drop the suffix. It makes the API cleaner.

Done via the following script:

    __from=drm_gem_object_put_unlocked
    __to=drm_gem_object_put

    for __file in $(git grep --name-only $__from); do
        sed -i "s/$__from/$__to/g" $__file;
    done

Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20200515095118.2743122-32-emil.l.velikov@gmail.com
-
- 07 February 2020, 1 commit

Authored by Thierry Reding

This partially reverts the DMA API support that was recently merged because it was causing performance regressions on older Tegra devices. Unfortunately, the cache maintenance performed by dma_map_sg() and dma_unmap_sg() causes performance to drop by a factor of 10.

The right solution for this would be to cache mappings for buffers per consumer device, but that's a bit involved. Instead, we simply revert to the old behaviour of sharing IOVA mappings when we know that devices can do so (i.e. they share the same IOMMU domain).

Cc: <stable@vger.kernel.org> # v5.5
Reported-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
-
- 04 December 2019, 2 commits

Authored by Thierry Reding

All the display related blocks on Tegra require contiguous memory. Using the DMA API, there is no way of knowing at import time which device will end up using the buffer, so it's not known whether or not an IOMMU will be used to map the buffer.

Move the check for non-contiguous buffers/mappings to the tegra_dc_pin() function, which is now the earliest point where it is known whether a DMA-BUF can be used by the given device or not.

v2: add check for contiguous buffer/mapping in tegra_dc_pin()

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Authored by Thierry Reding

Buffers that are imported from a DMA-BUF don't have pages allocated with them. At the same time an SG table for them can't be derived using the DMA API helpers because the necessary information doesn't exist. However there's already an SG table that was created during import, so this can simply be duplicated.

Signed-off-by: Thierry Reding <treding@nvidia.com>
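A minimal sketch of duplicating an SG table entry by entry (illustrative names, not the literal tegra implementation):

```c
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Copy each entry of @src into a freshly allocated SG table. */
static struct sg_table *example_sgt_dup(struct sg_table *src)
{
	struct scatterlist *s, *d;
	struct sg_table *sgt;
	unsigned int i;

	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
	if (!sgt)
		return ERR_PTR(-ENOMEM);

	if (sg_alloc_table(sgt, src->orig_nents, GFP_KERNEL)) {
		kfree(sgt);
		return ERR_PTR(-ENOMEM);
	}

	d = sgt->sgl;
	for_each_sg(src->sgl, s, src->orig_nents, i) {
		sg_set_page(d, sg_page(s), s->length, s->offset);
		d = sg_next(d);
	}

	return sgt;
}
```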
-
- 26 November 2019, 2 commits

Authored by Daniel Vetter

No in-tree users left.

Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>
Cc: linux-tegra@vger.kernel.org
Link: https://patchwork.freedesktop.org/patch/msgid/20191118103536.17675-9-daniel.vetter@ffwll.ch
-
Authored by Daniel Vetter

It doesn't have any callers anymore.

Aside: the ->mmap/munmap hooks have a somewhat confusing name: they don't do userspace mmaps, but a kernel vmap. I think most places use vmap for this, except ttm, which for added confusion uses kmap for vmap. mmap seems to be used entirely for userspace mappings set up through the mmap(2) syscall.

Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>
Cc: linux-tegra@vger.kernel.org
Link: https://patchwork.freedesktop.org/patch/msgid/20191118103536.17675-3-daniel.vetter@ffwll.ch
-
- 29 October 2019, 2 commits

Authored by Thierry Reding

If host1x_bo_pin() returns an SG table, create a DMA mapping for the buffer. For buffers that the host1x client has already mapped itself, host1x_bo_pin() returns NULL and the existing DMA address is used.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Authored by Thierry Reding

The host1x_bo_pin() and host1x_bo_unpin() APIs are used to pin and unpin buffers during host1x job submission. Pinning currently returns the SG table and the DMA address (an IOVA if an IOMMU is used or a physical address if no IOMMU is used) of the buffer. The DMA address is only used for buffers that are relocated, whereas the host1x driver will map gather buffers into its own IOVA space so that they can be processed by the CDMA engine.

This approach has a couple of issues. On one hand it's not very useful to return a DMA address for the buffer if host1x doesn't need it. On the other hand, returning the SG table of the buffer is suboptimal because a single SG table cannot be shared for multiple mappings, because the DMA address is stored within the SG table, and the DMA address may be different for different devices.

Subsequent patches will move the host1x driver over to the DMA API, which doesn't work with a single shared SG table. Fix this by returning a new SG table each time a buffer is pinned. This allows the buffer to be referenced by multiple jobs for different engines.

Change the prototypes of host1x_bo_pin() and host1x_bo_unpin() to take a struct device *, specifying the device for which the buffer should be pinned. This is required in order to be able to properly construct the SG table. While at it, make host1x_bo_pin() return the SG table because that allows us to return an ERR_PTR()-encoded error code if we need to, or return NULL to signal that we don't need the SG table to be remapped and can simply use the DMA address as-is. At the same time, returning the DMA address is made optional because in the example of command buffers, host1x doesn't need to know the DMA address since it will have to create its own mapping anyway.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 28 October 2019, 4 commits

Authored by Thierry Reding

Instead of manually creating the SG table for a discontiguous buffer, use the existing sg_alloc_table_from_pages().

Note that this is not safe to be used with the ARM DMA/IOMMU integration code because that will not ensure that the whole buffer is mapped contiguously. Depending on the size of the individual entries the mapping may end up containing holes to ensure alignment. However, we only ever use these buffers with explicit IOMMU API usage and know how to avoid these holes.

Signed-off-by: Thierry Reding <treding@nvidia.com>
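A sketch of the helper's use (names are illustrative):

```c
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Build the SG table for a discontiguous buffer from its pages array
 * instead of open-coding the scatterlist setup.
 */
static struct sg_table *example_pages_to_sgt(struct page **pages,
					     unsigned int num_pages,
					     size_t size)
{
	struct sg_table *sgt;
	int err;

	sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
	if (!sgt)
		return ERR_PTR(-ENOMEM);

	err = sg_alloc_table_from_pages(sgt, pages, num_pages, 0, size,
					GFP_KERNEL);
	if (err) {
		kfree(sgt);
		return ERR_PTR(err);
	}

	return sgt;
}
```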
-
Authored by Thierry Reding

When an importer wants to map a DMA-BUF, make sure to always actually map it, irrespective of whether the buffer is contiguous or not.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Authored by Thierry Reding

Rather than manually creating an SG table in an incorrect way, let the standard dma_get_sgtable() function do it.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Authored by Thierry Reding

The address can refer to either physical memory or IO virtual memory. If referring to IO virtual memory, there will always be an associated physical memory address. Rename this variable to "iova" to clarify in all cases that this is the IO virtual address, which in the absence of an IOMMU is identical to the physical address.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 15 August 2019, 1 commit

Authored by Sam Ravnborg

Drop use of the deprecated drmP.h header file. For all touched files divide include files into blocks, and sort them within the blocks. Fix fallout.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>
Cc: linux-tegra@vger.kernel.org
Link: https://patchwork.freedesktop.org/patch/msgid/20190804094132.29463-3-sam@ravnborg.org
-
- 21 June 2019, 1 commit

Authored by Daniel Vetter

The idea is that gem_prime_export is deprecated in favor of obj_funcs.export. That's much easier to do if both have matching function signatures.

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: "Michel Dänzer" <michel.daenzer@amd.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Hawking Zhang <Hawking.Zhang@amd.com>
Cc: Feifei Xu <Feifei.Xu@amd.com>
Cc: Jim Qu <Jim.Qu@amd.com>
Cc: Evan Quan <evan.quan@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Thomas Zimmermann <tdz@users.sourceforge.net>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Jilayne Lovejoy <opensource@jilayne.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Junwei Zhang <Jerry.Zhang@amd.com>
Cc: intel-gvt-dev@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: amd-gfx@lists.freedesktop.org
Cc: linux-tegra@vger.kernel.org
Link: https://patchwork.freedesktop.org/patch/msgid/20190614203615.12639-10-daniel.vetter@ffwll.ch
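For reference, a sketch of the aligned signatures (shapes as described above, shown as function-pointer declarations):

```c
#include <drm/drm_gem.h>

/* After the change both hooks take only the object and the flags: the
 * deprecated driver-wide hook in struct drm_driver ...
 */
struct dma_buf *(*gem_prime_export)(struct drm_gem_object *obj, int flags);

/* ... and the per-object hook in struct drm_gem_object_funcs. */
struct dma_buf *(*export)(struct drm_gem_object *obj, int flags);
```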
-
- 19 June 2019, 1 commit

Authored by Thomas Gleixner

Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

extracted by the scancode license scanner, the SPDX license identifier

    GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
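For reference, the replacement boilerplate is a single SPDX tag at the top of each file; in kernel C source files it takes the `//` form:

```c
// SPDX-License-Identifier: GPL-2.0-only
```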
-
- 18 April 2019, 1 commit

Authored by Dmitry Osipenko

The allocated pages need to be invalidated in CPU caches. On ARM32 the DMA_BIDIRECTIONAL flag only ensures that data is written back to DRAM and the data stays in CPU cache lines, while the DMA_FROM_DEVICE flag ensures that the corresponding CPU cache lines are invalidated and nothing more; that's exactly what is needed for newly allocated pages.

This fixes randomly failing rendercheck tests on Tegra30 using the Opentegra driver for tests that use small-sized pixmaps (10x10 and less, i.e. 1-2 memory pages), because apparently the CPU reads out stale data from caches and/or that data is getting evicted to DRAM at the time of HW job execution.

Fixes: bd43c9f0 ("drm/tegra: gem: Map pages via the DMA API")
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
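A sketch of the mapping direction this describes (names are illustrative):

```c
#include <linux/dma-mapping.h>

/* Map freshly allocated pages with DMA_FROM_DEVICE so the CPU cache
 * lines covering them are invalidated; on ARM32, DMA_BIDIRECTIONAL
 * would only clean (write back) those lines.
 */
static int example_map_new_pages(struct device *dev, struct sg_table *sgt)
{
	if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_FROM_DEVICE))
		return -ENOMEM;

	return 0;
}
```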
-
- 20 June 2018, 1 commit

Authored by Christian König

Neither used nor correctly implemented anywhere. Just completely remove the interface.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Link: https://patchwork.freedesktop.org/patch/226645/
-
- 19 May 2018, 1 commit

Authored by Thierry Reding

Set the owner and name of the exported DMA-BUF in addition to the already filled-in fields.

Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
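A hedged sketch of an exporter using DEFINE_DMA_BUF_EXPORT_INFO(), which pre-fills the owner (THIS_MODULE) and name alongside the fields set explicitly; `struct example_bo` and `example_dmabuf_ops` are hypothetical:

```c
#include <linux/dma-buf.h>

struct example_bo {
	size_t size;
};

static const struct dma_buf_ops example_dmabuf_ops;	/* hypothetical ops */

static struct dma_buf *example_export(struct example_bo *bo, int flags)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);	/* sets .owner and .exp_name */

	exp_info.ops = &example_dmabuf_ops;
	exp_info.size = bo->size;
	exp_info.flags = flags;
	exp_info.priv = bo;

	return dma_buf_export(&exp_info);
}
```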
-
- 17 May 2018, 1 commit

Authored by Souptick Joarder

Use the new return type vm_fault_t for the fault handler. For now, this is just documenting that the function returns a VM_FAULT value rather than an errno. Once all instances are converted, vm_fault_t will become a distinct type.

Reference id -> 1c8f4220 ("mm: change return type to vm_fault_t")

Previously vm_insert_page() returned an error code which the driver mapped onto a VM_FAULT_* value. The new function vmf_insert_page() removes this inefficiency by returning a VM_FAULT_* value directly.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
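A sketch of a fault handler using the new return type and helper (the page lookup is a hypothetical placeholder; the tegra handler resolves the page from its BO):

```c
#include <linux/mm.h>

static struct page *example_lookup_page(struct vm_fault *vmf);	/* hypothetical */

static vm_fault_t example_fault(struct vm_fault *vmf)
{
	struct page *page = example_lookup_page(vmf);

	if (!page)
		return VM_FAULT_SIGBUS;

	/* vmf_insert_page() already returns a VM_FAULT_* value */
	return vmf_insert_page(vmf->vma, vmf->address, page);
}
```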
-
- 17 March 2018, 2 commits

Authored by Thierry Reding

These callbacks allow the exporter to swap in and pin the backing storage for buffers as well as invalidate the cache in preparation for accessing the buffer from the CPU, and flush the cache and unpin the backing storage when the CPU is done modifying the buffer.

Signed-off-by: Thierry Reding <treding@nvidia.com>
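The dma_buf_ops hooks in question have this shape (a sketch; the bodies only indicate what an exporter would typically do there):

```c
#include <linux/dma-buf.h>

/* Prepare the backing storage and invalidate caches before CPU access,
 * e.g. via dma_sync_sg_for_cpu() on the exporter's SG table.
 */
static int example_begin_cpu_access(struct dma_buf *buf,
				    enum dma_data_direction direction)
{
	return 0;
}

/* Flush caches and release the backing storage afterwards,
 * e.g. via dma_sync_sg_for_device().
 */
static int example_end_cpu_access(struct dma_buf *buf,
				  enum dma_data_direction direction)
{
	return 0;
}
```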
-
Authored by Thierry Reding

When allocating pages, map them with the DMA API in order to invalidate caches. This is the correct usage of the API and works just as well as faking up the SG table and using the dma_sync_sg_for_device() function.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 08 March 2018, 1 commit

Authored by Thierry Reding

This function allows mapping a GEM object into a virtual memory address space, which makes it useful outside of the GEM code. While at it, rename the function so it doesn't clash with the function that implements the DRM_TEGRA_GEM_MMAP IOCTL.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 21 December 2017, 1 commit

Authored by Dmitry Osipenko

iommu_map_sg() doesn't return an error value, but the size of the requested IOMMU mapping, or zero in case of error.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
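A sketch of the corrected error check (names illustrative): the result must be compared against the expected size rather than treated as an errno.

```c
#include <linux/iommu.h>
#include <linux/scatterlist.h>

static int example_iommu_map(struct iommu_domain *domain, unsigned long iova,
			     struct sg_table *sgt, size_t size)
{
	size_t mapped;

	/* returns the number of bytes mapped, or 0 on error */
	mapped = iommu_map_sg(domain, iova, sgt->sgl, sgt->nents,
			      IOMMU_READ | IOMMU_WRITE);
	if (mapped < size)
		return -ENOMEM;

	return 0;
}
```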
-
- 17 August 2017, 3 commits

Authored by Thierry Reding

The mapping of PRIME buffers can reuse much of the GEM mapping code, so extract the common bits into a new tegra_gem_mmap() helper.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Authored by Cihangir Akturk

Use the drm_*_get() and drm_*_put() helpers instead of the drm_*_reference() and drm_*_unreference() helpers. drm_*_reference() and drm_*_unreference() are just compatibility aliases for drm_*_get() and drm_*_put() and should not be used by new code. So convert all users of the compatibility functions to use the new APIs.

Generated by: scripts/coccinelle/api/drm-get-put.cocci

Signed-off-by: Cihangir Akturk <cakturk@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Authored by Noralf Trønnes

This driver can use the drm_driver.dumb_destroy and drm_driver.dumb_map_offset defaults, so no need to set them.

Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/1502034068-51384-7-git-send-email-noralf@tronnes.org
-
- 15 June 2017, 1 commit

Authored by Dmitry Osipenko

If a command buffer claims a number of words that is higher than its BO can fit, a kernel oops will be triggered by the out-of-bounds BO access. This was triggered by an opentegra Xorg driver that erroneously pushed too many commands to the pushbuf.

The CDMA command buffer address is 4-byte aligned, so check its alignment. The maximum number of CDMA gather fetches is 16383, so add a check for it. Add a sanity check for the relocations in the same way.

    [   46.829393] Unable to handle kernel paging request at virtual address f09b2000
    ...
    [<c04a3ba4>] (host1x_job_pin) from [<c04dfcd0>] (tegra_drm_submit+0x474/0x510)
    [<c04dfcd0>] (tegra_drm_submit) from [<c04deea0>] (tegra_submit+0x50/0x6c)
    [<c04deea0>] (tegra_submit) from [<c04c07c0>] (drm_ioctl+0x1e4/0x3ec)
    [<c04c07c0>] (drm_ioctl) from [<c02541a0>] (do_vfs_ioctl+0x9c/0x8e4)
    [<c02541a0>] (do_vfs_ioctl) from [<c0254a1c>] (SyS_ioctl+0x34/0x5c)
    [<c0254a1c>] (SyS_ioctl) from [<c0107640>] (ret_fast_syscall+0x0/0x3c)

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Erik Faye-Lund <kusmabite@gmail.com>
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
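A minimal sketch of the kind of sanity checks described, under assumptions; this is not the literal host1x submission code, and all names and the exact check placement are illustrative:

```c
#include <linux/types.h>

static bool example_gather_valid(u64 offset, u64 words, u64 bo_size)
{
	/* the gather base address must be 4-byte aligned */
	if (offset & 3)
		return false;

	/* CDMA limits a gather fetch to 16383 words */
	if (words > 16383)
		return false;

	/* the claimed words must fit within the BO */
	if (words * 4 > bo_size || offset > bo_size - words * 4)
		return false;

	return true;
}
```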
-
- 20 April 2017, 1 commit

Authored by Logan Gunthorpe

Because the kunmap_atomic member of dma_buf_ops shares its name with a macro in highmem.h, the former can be aliased if any dma-buf user includes that header. I'm personally trying to include highmem.h inside scatterlist.h and this breaks the dma-buf code proper.

Christoph Hellwig suggested [1] renaming it and pushing this patch ASAP. To maintain consistency I've renamed all four of kmap* and kunmap* to be map* and unmap*. (Even though only kmap_atomic presently conflicts.)

[1] https://www.spinics.net/lists/target-devel/msg15070.html

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Link: http://patchwork.freedesktop.org/patch/msgid/1492630570-879-1-git-send-email-logang@deltatee.com
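In terms of struct dma_buf_ops the rename looks like this (a sketch; the handler names are hypothetical):

```c
#include <linux/dma-buf.h>

static void *example_map_atomic(struct dma_buf *buf, unsigned long page_num);
static void example_unmap_atomic(struct dma_buf *buf, unsigned long page_num,
				 void *addr);
static void *example_map(struct dma_buf *buf, unsigned long page_num);
static void example_unmap(struct dma_buf *buf, unsigned long page_num,
			  void *addr);

static const struct dma_buf_ops example_dmabuf_ops = {
	/* formerly .kmap_atomic / .kunmap_atomic / .kmap / .kunmap */
	.map_atomic	= example_map_atomic,
	.unmap_atomic	= example_unmap_atomic,
	.map		= example_map,
	.unmap		= example_unmap,
};
```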
-
- 06 April 2017, 1 commit

Authored by Thierry Reding

IOMMU support is currently not thread-safe, which can cause crashes, amongst other things, under certain workloads.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 25 February 2017, 1 commit

Authored by Dave Jiang

->fault(), ->page_mkwrite(), and ->pfn_mkwrite() calls do not need to take a vma and vmf parameter when the vma already resides in vmf. Remove the vma parameter to simplify things.

[arnd@arndb.de: fix ARM build]

Link: http://lkml.kernel.org/r/20170125223558.1451224-1-arnd@arndb.de
Link: http://lkml.kernel.org/r/148521301778.19116.10840599906674778980.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
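A sketch of the change in shape for a fault handler (the helper is hypothetical; at this point handlers still returned int, the vm_fault_t conversion came later):

```c
#include <linux/mm.h>

static int example_handle_fault(struct vm_area_struct *vma,
				unsigned long address);	/* hypothetical */

/* before: int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
 * after:  int (*fault)(struct vm_fault *vmf);  -- vma now comes from vmf
 */
static int example_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	return example_handle_fault(vma, vmf->address);
}
```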
-
- 03 February 2017, 1 commit

Authored by Chris Wilson

The drm_mm range manager claimed to support top-down insertion, but it was neither searching for the top-most hole that could fit the allocation request nor fitting the request to the hole correctly.

In order to search the range efficiently, we create a secondary index for the holes using either their size or their address. This index allows us to find the smallest hole or the hole at the bottom or top of the range efficiently, whilst keeping the hole stack to rapidly service evictions.

v2: Search for holes both high and low. Rename flags to mode.
v3: Discover rb_entry_safe() and use it!
v4: Kerneldoc for enum drm_mm_insert_mode.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Alexandre Courbot <gnurou@gmail.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Sinclair Yeh <syeh@vmware.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com> # vmwgfx
Reviewed-by: Lucas Stach <l.stach@pengutronix.de> # etnaviv
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170202210438.28702-1-chris@chris-wilson.co.uk
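With the reworked range manager the insertion mode becomes explicit; a sketch of a true top-down allocation over the whole range (caller name illustrative):

```c
#include <drm/drm_mm.h>

static int example_alloc_topdown(struct drm_mm *mm, struct drm_mm_node *node,
				 u64 size)
{
	/* args: mm, node, size, alignment, color, start, end, mode */
	return drm_mm_insert_node_in_range(mm, node, size, 0, 0,
					   0, U64_MAX, DRM_MM_INSERT_HIGH);
}
```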
-
- 15 December 2016, 1 commit

Authored by Jan Kara

Every single user of vmf->virtual_address typed that entry to unsigned long before doing anything with it, so the type of virtual_address does not really provide us any additional safety. Just use the masked vmf->address, which already has the appropriate type.

Link: http://lkml.kernel.org/r/1479460644-25076-3-git-send-email-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
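The per-call-site change is mechanical; a sketch:

```c
/* before */
unsigned long address = (unsigned long)vmf->virtual_address;

/* after */
unsigned long address = vmf->address;
```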
-
- 11 November 2016, 2 commits

Authored by Mikko Perttunen

Fix tegra_bo_pin() to set the parameter sgt pointer. host1x job pinning requires the sgt to determine physical memory addresses of gathers.

Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
-
Authored by Arto Merilainen

host1x command buffer patching requires that the buffer object can be mapped into kernel address space. However, the recent addition of IOMMU support did not account for this requirement. Therefore host1x engines cannot be used if the IOMMU is enabled. This patch implements the kmap, kunmap, mmap and munmap functions for host1x buffer objects.

Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 07 November 2016, 1 commit

Authored by Christophe JAILLET

dma_buf_map_attachment() never returns NULL, so there is no need to check for it.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Thierry Reding <treding@nvidia.com>
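A sketch of the correct error check (names illustrative): the function returns an ERR_PTR() on failure, never NULL, so IS_ERR() is the right test.

```c
#include <linux/dma-buf.h>
#include <linux/err.h>

static int example_map_attachment(struct dma_buf_attachment *attach,
				  struct sg_table **sgt)
{
	*sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
	if (IS_ERR(*sgt))	/* never NULL, so no !*sgt check needed */
		return PTR_ERR(*sgt);

	return 0;
}
```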
-
- 05 October 2016, 1 commit

Authored by Chris Wilson

dma_buf may live a long time, longer than the last direct user of the driver. We already hold a reference to the owner module (that prevents the object code from disappearing), but there is no reference to the drm_dev - so the pointers to the driver backend themselves may vanish.

v2: Resist temptation to fix the bug in armada_gem.c not setting the correct flags on the exported dma-buf (it should pass the flags through and not be arbitrarily setting O_RDWR). Use a common wrapper for exporting the dmabuf and acquiring the reference to the drm_device.

Testcase: igt/vgem_basic/unload
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Tested-by: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161005122145.1507-2-chris@chris-wilson.co.uk
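A sketch of the common wrapper's shape as described above; exporters call it instead of dma_buf_export() directly so the device reference is taken:

```c
struct dma_buf;
struct dma_buf_export_info;
struct drm_device;

/* Exports the dma-buf and additionally grabs a reference on @dev,
 * keeping the driver backend pointers valid for the dma-buf's lifetime.
 */
struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev,
				      struct dma_buf_export_info *exp_info);
```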
-
- 17 May 2016, 1 commit

Authored by Chris Wilson

drm_gem_object_lookup() has never required the drm_device for its file-local translation of the user handle to the GEM object. Let's remove the unused parameter and save some space.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Cc: Dave Airlie <airlied@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Fixup kerneldoc too.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
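A sketch of the simplified lookup (the handle is file-local, so the drm_device parameter was never needed; caller name illustrative):

```c
#include <drm/drm_gem.h>

/* before: gem = drm_gem_object_lookup(drm, file, handle); */
static struct drm_gem_object *example_lookup(struct drm_file *file, u32 handle)
{
	return drm_gem_object_lookup(file, handle);
}
```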
-