- 13 Dec 2012, 2 commits
-
-
Committed by Jerome Glisse
Set the proper number of tile pipes; it should be a multiple of the pipe count, depending on the number of shader engines (SEs).
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=56405
https://bugs.freedesktop.org/show_bug.cgi?id=56720
v2: Don't change sumo2.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Cc: stable@vger.kernel.org
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Jerome Glisse
The placement chosen at bo creation is where the bo will stay. Instead of trying to move the bo at each command stream, leave that work to another worker thread that will use a more advanced heuristic.
agd5f: remove leftover unused variable
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
- 11 Dec 2012, 10 commits
-
-
Committed by Alex Deucher
The DMA engine has special packets to facilitate this, and it also keeps the 3D engine free for other work.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
Async DMA has a special packet for contiguous page-table (PT) updates, which saves overhead.
v2: rebase
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
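To make the overhead saving concrete, here is a hedged sketch of such a packet's shape. This is illustrative only, not the real radeon DMA encoding: the point is that one header describing a whole run of PTEs replaces one write packet per entry.

    #include <stdint.h>

    /* Illustrative only, not the actual radeon DMA packet format. A single
     * "contiguous PT update" packet carries the run description, so writing
     * n entries costs one header instead of n separate write packets. */
    struct fake_pte_run_packet {
        uint64_t pe;     /* GPU address of the first PTE to write */
        uint32_t count;  /* number of consecutive PTEs in the run */
        uint32_t incr;   /* byte step between the addresses being mapped */
        uint64_t addr;   /* base address of the mapping, flags OR'd in */
    };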
-
Committed by Alex Deucher
The DMA engine has special packets to facilitate this, and it also keeps the 3D engine free for other work.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
Async DMA has a special packet for contiguous PT updates, which saves overhead.
v2: Leave the CP method enabled for now, as doing the updates in the DMA rings is not working properly yet.
v3: Update for 2-level PTs.
v4: Rebase.
v5: Drop the pte/pde packet; it doesn't seem to work on NI.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
Pretty much the same as cayman, with some changes to the copy packets.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
There are two async DMA engines on cayman, one at 0xd000 and one at 0xd800. The programming interface is the same as on evergreen; however, there are some changes to the commands for using VMIDs.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
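As a hedged sketch of what "two instances of the same engine" means in practice, an instance-indexed register helper; the 0xd000/0xd800 bases come from the commit message, while the helper itself is an assumption:

    #include <stdint.h>

    #define CAYMAN_DMA0_BASE 0xd000 /* engine bases quoted in the commit message */
    #define CAYMAN_DMA1_BASE 0xd800

    /* Compute the MMIO address of a per-engine register for either instance. */
    static inline uint32_t cayman_dma_reg(int instance, uint32_t offset)
    {
        return (instance ? CAYMAN_DMA1_BASE : CAYMAN_DMA0_BASE) + offset;
    }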
-
Committed by Alex Deucher
Pretty similar to 6xx/7xx, except that the count field in the packet header and the maximum IB size have both increased.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
Uses the new multi-ring infrastructure. 6xx/7xx has a single async DMA ring.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 10 Dec 2012, 6 commits
-
-
Committed by Maarten Lankhorst
All items on the lru list are always reservable, so this is a stupid thing to keep. Not only that, it is used in a way that would guarantee deadlocks if it were ever set to block on reserve. This is a lot of churn, mostly because of the removal of the argument, which can be nested arbitrarily deeply in many places. No code change in this patch other than the removal of the no_wait_reserve argument; the previous patch removed its last use.
v2: Warn if -EBUSY is returned on reservation; all objects on the list should be reservable. Adjusted the patch slightly due to conflicts.
v3: Focus on no_wait_reserve removal only.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
Replace the goto loop with a simple for-each loop, and only run the delayed-destroy cleanup if we can reserve the buffer first. No race occurs, since the lru lock is never dropped any more. An empty list and a list full of unreservable buffers both cause -EBUSY to be returned, which is identical to the previous situation, because previously buffers on the lru list were always guaranteed to be reservable. This works since TTM currently guarantees that items on the lru are always reservable, and reserving items blockingly while holding some bo reserved is enough to run into a deadlock. Currently this is not a concern, since removal from the lru list and reservation are always done atomically, but when this guarantee no longer holds we have to handle the situation or end up with possible deadlocks.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
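A hedged sketch of the loop shape described above, simplified from the 2012-era TTM swapout path (glob stands for the ttm_bo_global; names follow that code but this is not a verbatim copy):

    struct ttm_buffer_object *bo;
    int ret = -EBUSY;

    spin_lock(&glob->lru_lock);
    list_for_each_entry(bo, &glob->swap_lru, swap) {
        /* Non-blocking try-reserve; every lru entry should be reservable. */
        ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
        if (!ret)
            break;
    }
    if (ret) {
        /* Empty list or nothing reservable: report -EBUSY, as before. */
        spin_unlock(&glob->lru_lock);
        return ret;
    }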
-
Committed by Maarten Lankhorst
Replace the while loop with a simple for-each loop, and only run the delayed-destroy cleanup if we can reserve the buffer first. No race occurs, since the lru lock is never dropped any more. An empty list and a list full of unreservable buffers both cause -EBUSY to be returned, which is identical to the previous situation.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
By removing the unlocking of the lru lock and its immediate retaking, a race is removed where the bo is taken off the swap list or the lru list between the unlock and relock. The cleanup_refs code can therefore be simplified: it will attempt to call ttm_bo_wait non-blockingly, and if that fails it will drop the locks and perform a blocking wait, or return an error if no_wait_gpu was set. The need for looping is also eliminated, since swapout and evict_mem_first will always follow the destruction path, and no new fence is allowed to be attached. As far as I can see this may already have been the case, but the unlocking/relocking required a complicated loop to deal with re-reservation.
Changes since v1:
- Simplify the no_wait_gpu case by folding it in with the empty ddestroy case.
- Hold a reservation while calling ttm_bo_cleanup_memtype_use again.
Changes since v2:
- Do not remove the bo from the lru list while waiting.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
The few places that care should do those checks instead. This allows destruction of bo-backed memory without a reservation. It is required for reworking the delayed-destroy path, since that path is no longer guaranteed to hold a reservation before unlocking. However, any previous wait is still guaranteed to complete, and it is one of the last things done before the buffer object is freed.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Maarten Lankhorst
This requires changing the order in ttm_bo_cleanup_refs_or_queue to take the reservation first, as there is otherwise no race-free way to take the lru lock before the fence_lock.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 08 Dec 2012, 6 commits
-
-
Committed by Alex Deucher
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Alex Deucher
We need to use the adjusted mode, since we are sending native timing and using the scaler for non-native modes.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Cc: stable@vger.kernel.org
-
Committed by Alex Deucher
Add requests to get the number of shader engines (SEs) and the number of SHs per SE. These are needed for geometry and tessellation shaders in the 3D driver, as well as for setting up PA_SC_RASTER_CONFIG on SI ASICs.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
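A hypothetical userspace query using the new requests; the request name below follows the commit description but is an assumption, not checked against the final radeon_drm.h:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/radeon_drm.h>

    /* Ask the kernel for the number of shader engines on this GPU. */
    static int query_max_se(int fd, uint32_t *num_se)
    {
        struct drm_radeon_info info = {
            .request = RADEON_INFO_MAX_SE,   /* assumed request name */
            .value   = (uintptr_t)num_se,    /* kernel writes the result here */
        };

        return ioctl(fd, DRM_IOCTL_RADEON_INFO, &info);
    }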
-
Committed by Alex Deucher
Fixes flickering with some high-resolution monitors.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Committed by Jerome Glisse
Force the use of cached memory when evicting from VRAM on non-AGP hardware, and force write-combining on AGP hardware. This ensures the minimum cache-type change when allocating memory and improves memory eviction, especially on PCI/PCIe hardware.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
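A minimal sketch of that eviction policy; TTM_PL_FLAG_CACHED and TTM_PL_FLAG_WC are real TTM placement flags of that era, but the helper itself is illustrative:

    #include <linux/types.h>
    #include <drm/ttm/ttm_placement.h> /* TTM_PL_FLAG_CACHED, TTM_PL_FLAG_WC */

    /* Caching flags to request for the destination placement when
     * evicting a buffer out of VRAM. */
    static u32 evict_caching_flags(bool is_agp)
    {
        return is_agp ? TTM_PL_FLAG_WC : TTM_PL_FLAG_CACHED;
    }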
-
Committed by Christian König
Redirect invalid memory accesses to the default page instead of locking up the memory controller. Also enable the invalid-memory-access interrupts and report the faults in the system log.
v2 (agd5f): fix up against the 2-level PT changes
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
- 05 Dec 2012, 12 commits
-
-
Committed by Rahul Sharma
This patch adds code for composing AVI and AUI infoframes and sending them every VSYNC. This is important for HDMI certification.
v3:
- Moved enums and macros to exynos_hdmi.c.
- Corrected hex format.
- Made hdmi_reg_infoframe static.
v2:
- Added a few blank lines.
- Corrected comment format.
- Added comments on the two's-complement calculation for the checksum.
v1:
- Removed unnecessary blank lines.
- Changed the case of hex constants.
Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com>
Signed-off-by: Fahad Kunnathadi <fahad.k@samsung.com>
Signed-off-by: Shirish S <s.shirish@samsung.com>
Acked-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
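The two's-complement checksum mentioned in v2 follows the standard infoframe rule: all bytes, including the checksum itself, must sum to zero modulo 256. A minimal sketch:

    #include <stddef.h>
    #include <stdint.h>

    /* Return the checksum byte such that
     * (sum of all other bytes + checksum) % 256 == 0. */
    static uint8_t infoframe_checksum(const uint8_t *data, size_t len)
    {
        uint8_t sum = 0;
        size_t i;

        for (i = 0; i < len; i++)
            sum += data[i];
        return (uint8_t)(0x100 - sum);
    }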
-
Committed by Sachin Kamat
devm_clk_get is device-managed and makes the error handling and exit code simpler. Also fixes an error where 'ret' could be returned without being initialised with an error code.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
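A minimal sketch of the devm_* pattern this series of patches applies; the clock name "sclk" is illustrative:

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
        /* Managed lookup: the clock is released automatically when the
         * device is unbound, so error paths need no clk_put(). */
        struct clk *clk = devm_clk_get(&pdev->dev, "sclk");

        if (IS_ERR(clk))
            return PTR_ERR(clk);

        return clk_prepare_enable(clk);
    }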
-
Committed by Sachin Kamat
devm_* functions are device-managed and make the error handling and exit code simpler.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Sachin Kamat
devm_clk_get is device-managed and makes the error handling and exit code simpler.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Sachin Kamat
The pointer was being dereferenced after being freed. Fixes the following error:
drivers/gpu/drm/exynos/exynos_drm_g2d.c:323 g2d_userptr_put_dma_addr() error: dereferencing freed memory 'g2d_userptr'
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
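The shape of this class of fix, illustratively; the field name below is an assumption, not the actual exynos code:

    /* Copy out whatever is still needed before freeing, then never
     * touch the freed pointer again. */
    dma_addr_t dma_addr = g2d_userptr->dma_addr; /* assumed field name */

    kfree(g2d_userptr);
    g2d_userptr = NULL;
    /* ...continue using dma_addr, not g2d_userptr->dma_addr... */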
-
Committed by Sachin Kamat
devm_clk_get is device-managed and makes the error handling and exit code simpler.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Prathyush K
The 'pages' structure in the exynos gem buffer has been removed, so we now get fix.smem_start from the first sgl of the scatter-gather table.
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
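A minimal sketch of the replacement, assuming exynos-style field names; sg_dma_address() is the standard scatterlist accessor:

    /* For a contiguous buffer the first (and only) sgl entry covers the
     * whole allocation, so its DMA address is the buffer start. */
    fbi->fix.smem_start = sg_dma_address(buffer->sgt->sgl);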
-
Committed by Rahul Sharma
This patch preserves the display mode header during mode adjustment. The display mode header was being overwritten with the adjusted mode header, which was causing a stack dump.
Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Egbert Eich
drm_get_edid() returns a pointer to an EDID block, and the caller is responsible for freeing it. Here the pointer gets assigned to the local variable raw_edid, so it should be freed before the variable goes out of scope.
Signed-off-by: Egbert Eich <eich@suse.de>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
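A sketch of the ownership rule being fixed; connector and adapter stand in for whatever the call site has at hand:

    struct edid *raw_edid = drm_get_edid(connector, adapter);

    if (raw_edid) {
        /* ...inspect or copy what is needed from raw_edid... */
        kfree(raw_edid); /* the caller owns the returned block */
    }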
-
Committed by Prathyush K
Changelog v2: Removed the redundant check for an invalid sgl. Added a check for a valid page_offset at the beginning of exynos_drm_gem_map_buf.
Changelog v1: The 'pages' structure is not required, since we can use the 'sgt'. Even for CONTIG buffers an SGT is created (it will have just one sgl), and this SGT can be used during mmap instead of 'pages'. The 'page_size' element of the structure is not used anywhere and is removed. This patch also fixes a memory leak where the 'pages' structure was allocated during gem buffer allocation but never freed on deallocation.
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Prathyush K
Changelog v3: Pass the actual buffer size instead of vm_size to dma_mmap_attrs.
Changelog v2: Extract the private data from the fb_info structure to obtain the exynos gem buffer structure. The DMA address is now obtained from the exynos gem buffer structure, not from smem_start. Also call dma_mmap_attrs (instead of dma_mmap_writecombine) with the same attributes used during allocation.
Changelog v1: This patch adds an exynos-drm-specific implementation of fb_mmap which supports mapping a non-contiguous buffer to user space. The new function does not assume that the frame buffer is contiguous, and calls dma_mmap_writecombine to map the buffer to user space. dma_mmap_writecombine can map a contiguous as well as a non-contiguous buffer, depending on whether an IOMMU mapping was created for drm or not.
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
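A hedged sketch of the resulting fb_mmap path, using the dma_mmap_attrs() signature of that era; the private-data retrieval and the buffer field names are assumptions:

    static int exynos_drm_fb_mmap_sketch(struct fb_info *info,
                                         struct vm_area_struct *vma)
    {
        struct exynos_drm_gem_buf *buffer = info->par; /* assumed retrieval */

        /* Map through the DMA API so both contiguous and IOMMU-backed
         * buffers work, with the same attributes used at allocation. */
        return dma_mmap_attrs(info->device, vma, buffer->kvaddr,
                              buffer->dma_addr, buffer->size,
                              &buffer->dma_attrs);
    }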
-
Committed by Inki Dae
Changelog v2: Fix a small performance issue in the previous patch.
- When a drm framebuffer is destroyed, make sure the overlay data are updated on the real hardware for all encoders, instead of waiting for vblank on every page-flip request. For this, a new function, exynos_drm_encoder_complete_scanout, is added.
Changelog v1: This patch removes the wait_for_vblank call from exynos_drm_encoder_plane_disable and moves it to exynos_drm_encoder_plane_commit. Disabling the DMA channel of each plane doesn't need a vblank signal to update data on the real hardware, but updating overlay data on the real hardware does.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
- 04 Dec 2012, 4 commits
-
-
Committed by Inki Dae
Changelog v3: Use the drm_file's file object instead of the gem object's; the gem object's file represents the shmem storage, so the process-unique file object should be used instead.
Changelog v2: Call mutex_lock before drm_vm_open_locked is called.
Changelog v1: This patch makes the driver take a reference to the gem object when a gem-specific mmap is requested. For this, it sets dev->driver->gem_vm_ops as vma->vm_ops. This patch is based on the exynos-drm-next-iommu branch of
git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Inki Dae
This patch adds a userptr feature to the G2D module. A userptr is a user-space address allocated by malloc(), and the purpose of this feature is to let G2D's DMA access that user-space region.
To use this feature, user space should set the G2D_BUF_USERPTR flag in the offset variable of struct drm_exynos_g2d_cmd, fill a struct drm_exynos_g2d_userptr with the user-space address and its size, and then set a pointer to that drm_exynos_g2d_userptr object in the data variable of struct drm_exynos_g2d_cmd. The last bit of the offset variable indicates whether the cmdlist's buffer type is userptr or not. If it is, the g2d driver takes the user-space address and size and gets the pages through get_user_pages(). (The other case is treated as a gem handle.)
Sample code:

    static void set_cmd(struct drm_exynos_g2d_cmd *cmd,
                        unsigned long offset, unsigned long data)
    {
        cmd->offset = offset;
        cmd->data = data;
    }

    static int solid_fill_test(int x, int y, unsigned long userptr)
    {
        struct drm_exynos_g2d_cmd cmd_gem[5];
        struct drm_exynos_g2d_userptr g2d_userptr;
        unsigned int gem_nr = 0;
        ...
        g2d_userptr.userptr = userptr;
        g2d_userptr.size = x * y * 4;

        set_cmd(&cmd_gem[gem_nr++],
                DST_BASE_ADDR_REG | G2D_BUF_USERPTR,
                (unsigned long)&g2d_userptr);
        ...
    }

    int main(int argc, char **argv)
    {
        unsigned long addr;
        ...
        addr = (unsigned long)malloc(x * y * 4);
        ...
        solid_fill_test(x, y, addr);
        ...
    }

Next, the pages are mapped into the iommu table and the device address is set in the cmdlist so that G2D's DMA can access it. As you may know, the pages from get_user_pages() are pinned; in other words, they CANNOT be migrated or swapped out, so the DMA access is safe.
However, the userptr feature has a performance overhead, so this patch also adds a memory pool for it. Assume that user space sends a cmdlist with a userptr and size every time; get_user_pages() would then be called every time. The memory pool has a maximum size of 64MB, and every userptr the user has previously sent is held in it. If a userptr from user space matches one in the memory pool, the device address of the pooled userptr is set in the cmdlist instead.
Finally, the pages from get_user_pages() are freed once the user calls free() and the DMA access is completed. get_user_pages() actually takes two reference counts if the user process has never accessed the user region allocated by malloc(). Then, if the user calls free(), the page reference count becomes 1, and it drops to 0 with the put_page() call; the reverse holds as well. This is how the backing pages are used by the DMA and then freed.
This patch is based on "drm/exynos: add iommu support for g2d",
https://patchwork.kernel.org/patch/1629481/
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Prathyush K
The function dma_get_sgtable allocates an sg table internally, so it is not necessary to allocate one before calling it. The unnecessary sg_alloc_table call is removed.
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
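A sketch of the simplification; the buf field names are assumptions:

    struct sg_table sgt;
    int ret;

    /* dma_get_sgtable() allocates and fills the table's entries itself,
     * so no sg_alloc_table() call is needed beforehand. */
    ret = dma_get_sgtable(dev, &sgt, buf->kvaddr, buf->dma_addr, buf->size);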
-
Committed by Rahul Sharma
This patch fixes the mapping of contiguous and non-contiguous dma buffers. Currently the page struct is calculated from buf->dma_addr, which is not a physical address. It is replaced by buf->pages, which points to the page struct of the first page of the contiguous memory chunk; this gives the correct page frame number for mapping. Non-contiguous dma buffers are described using an SG table and SG lists, where each valid SG list points to a single page, or to a group of pages that are physically contiguous. The current implementation just maps the first page of each SG list and leaves the other pages unmapped, leading to a crash. The given solution finds the page struct for the faulting page by parsing the SG table, and maps it.
Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
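A hedged sketch of the lookup described above; the scatterlist iterators are the standard kernel ones, while the helper itself is illustrative:

    #include <linux/mm.h>
    #include <linux/scatterlist.h>

    /* Walk the SG table to the entry covering the faulting offset, then
     * pick the page within that entry. */
    static struct page *page_at_offset(struct sg_table *sgt, unsigned long off)
    {
        struct scatterlist *sg;
        int i;

        for_each_sg(sgt->sgl, sg, sgt->nents, i) {
            if (off < sg->length)
                return nth_page(sg_page(sg), off >> PAGE_SHIFT);
            off -= sg->length;
        }
        return NULL;
    }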
-