- 28 June 2013, 3 commits
-
-
By Will Deacon

The current code only clobbers a local variable, so the device is left with a stale mapping pointer.

Cc: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Will Deacon

IOMMU mappings take a prot parameter, identifying the protection bits to enforce on the newly created mapping (READ or WRITE). The ARM dma-mapping framework currently just passes 0 as the prot argument, resulting in faulting mappings. This patch infers the protection attributes based on the direction of the DMA transfer.

Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
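A minimal sketch of the idea, assuming the IOMMU_READ/IOMMU_WRITE protection flags from <linux/iommu.h>; the helper name is illustrative:

```c
#include <linux/dma-direction.h>
#include <linux/iommu.h>

/* Hypothetical helper: derive IOMMU prot bits from the DMA direction. */
static int dma_direction_to_prot(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_BIDIRECTIONAL:
		return IOMMU_READ | IOMMU_WRITE;
	case DMA_TO_DEVICE:
		return IOMMU_READ;	/* the device only reads the buffer */
	case DMA_FROM_DEVICE:
		return IOMMU_WRITE;	/* the device only writes the buffer */
	default:
		return 0;
	}
}
```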
-
By YoungJun Cho

__iommu_get_pages() already checks whether cpu_addr lies within the atomic_pool range, so when cpu_addr is in that range there is no need to check it a second time.

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 17 April 2013, 1 commit
-
-
By Joonsoo Kim

In kmap_atomic(), kmap_high_get() is invoked to check for an already mapped area. In __flush_dcache_page() and dma_cache_maint_page(), we explicitly call kmap_high_get() before kmap_atomic() when cache_is_vipt(), so kmap_high_get() can be invoked twice. This is a redundant operation, so remove one of the calls.

v2: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to be self-documenting

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 March 2013, 1 commit
-
-
By Marek Szyprowski

The atomic pool should always be allocated from the DMA zone, if such a zone is available in the system, to avoid issues caused by the limited DMA mask of any of the devices used for making an atomic allocation.

Reported-by: Krzysztof Halasa <khc@pm.waw.pl>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Stable <stable@vger.kernel.org> [v3.6+]
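A fragment-level sketch of the idea, assuming CONFIG_ZONE_DMA is enabled and with pool_size standing in for the configured pool size:

```c
/* Sketch: request the pool's backing pages from ZONE_DMA so that any
 * device, however narrow its DMA mask, can reach them. */
gfp_t gfp = GFP_KERNEL | GFP_DMA;
struct page *page = alloc_pages(gfp, get_order(pool_size));
```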
-
- 25 February 2013, 7 commits
-
-
By Marek Szyprowski

This patch removes the page_address() usage in the IOMMU-aware dma-mapping implementation and replaces it with direct use of the CPU virtual address provided by the caller. page_address() returned an incorrect address for pages remapped in the atomic pool, which caused a memory leak.

Reported-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
-
By Seung-Woo Kim

The alignment order for a DMA IOMMU buffer is derived from the buffer size. For large buffers this wastes IOMMU address space, so a configurable parameter that limits the maximum alignment order can reduce the waste.

Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin.park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
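A sketch of the clamp, assuming the limit is the ARM_DMA_IOMMU_ALIGNMENT Kconfig option and that the IOVA alignment order otherwise follows the buffer size:

```c
/* Cap the IOVA alignment order so a huge buffer does not force an
 * equally huge alignment hole in the IOMMU address space. */
unsigned int order = get_order(size);

if (order > CONFIG_ARM_DMA_IOMMU_ALIGNMENT)
	order = CONFIG_ARM_DMA_IOMMU_ALIGNMENT;
```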
-
By Marek Szyprowski

An IOMMU can provide access to any memory page, so there is no point in limiting the allocated pages to lowmem once the other parts of the dma-mapping subsystem correctly support highmem pages.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Marek Szyprowski

This patch adds the missing pieces to correctly support memory pages served from CMA regions placed in high memory zones. Please note that the default global CMA area is still put into lowmem and is limited by the optional architecture-specific DMA zone. One can however put device-specific CMA regions in a high memory zone to reduce lowmem usage.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
-
By Prathyush K

This patch adds EXPORT_SYMBOL_GPL calls to the three ARM IOMMU functions: arm_iommu_create_mapping, arm_iommu_free_mapping and arm_iommu_attach_device. These three functions are ARM-specific wrappers for creating/freeing/using an IOMMU mapping and they are called by various drivers. If any of these drivers needs to be built as a dynamic module, these functions need to be exported.

Changelog v2: using EXPORT_SYMBOL_GPL as suggested by Marek.

Signed-off-by: Prathyush K <prathyush.k@samsung.com>
[m.szyprowski: extended with recently introduced EXPORT_SYMBOL_GPL(arm_iommu_detach_device)]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
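The change itself is mechanical; going by the changelog it amounts to export annotations like these next to each function definition (function names taken from the text above):

```c
EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);
EXPORT_SYMBOL_GPL(arm_iommu_free_mapping);
EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
EXPORT_SYMBOL_GPL(arm_iommu_detach_device);	/* from the later extension */
```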
-
By Hiroshi Doyu

A counterpart of arm_iommu_attach_device().

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Hiroshi Doyu

struct dma_map_ops iommu_ops doesn't have a ->set_dma_mask callback, which causes a crash when dma_set_mask() is called from a driver.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
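Presumably the fix is a one-line addition to the ops table, reusing the generic ARM implementation; a sketch with the other fields elided:

```c
static struct dma_map_ops iommu_ops = {
	/* ... alloc/free/map/unmap callbacks ... */
	.set_dma_mask	= arm_dma_set_mask,	/* previously missing */
};
```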
-
- 08 February 2013, 1 commit
-
-
By Russell King

Realview fails to boot with this warning:

BUG: spinlock lockup suspected on CPU#0, init/1
 lock: 0xcf8bde10, .magic: dead4ead, .owner: init/1, .owner_cpu: 0
Backtrace:
[<c00185d8>] (dump_backtrace+0x0/0x10c) from [<c03294e8>] (dump_stack+0x18/0x1c)
 r6:cf8bde10 r5:cf83d1c0 r4:cf8bde10 r3:cf83d1c0
[<c03294d0>] (dump_stack+0x0/0x1c) from [<c018926c>] (spin_dump+0x84/0x98)
[<c01891e8>] (spin_dump+0x0/0x98) from [<c0189460>] (do_raw_spin_lock+0x100/0x198)
[<c0189360>] (do_raw_spin_lock+0x0/0x198) from [<c032cbac>] (_raw_spin_lock+0x3c/0x44)
[<c032cb70>] (_raw_spin_lock+0x0/0x44) from [<c01c9224>] (pl011_console_write+0xe8/0x11c)
[<c01c913c>] (pl011_console_write+0x0/0x11c) from [<c002aea8>] (call_console_drivers.clone.7+0xdc/0x104)
[<c002adcc>] (call_console_drivers.clone.7+0x0/0x104) from [<c002b320>] (console_unlock+0x2e8/0x454)
[<c002b038>] (console_unlock+0x0/0x454) from [<c002b8b4>] (vprintk_emit+0x2d8/0x594)
[<c002b5dc>] (vprintk_emit+0x0/0x594) from [<c0329718>] (printk+0x3c/0x44)
[<c03296dc>] (printk+0x0/0x44) from [<c002929c>] (warn_slowpath_common+0x28/0x6c)
[<c0029274>] (warn_slowpath_common+0x0/0x6c) from [<c0029304>] (warn_slowpath_null+0x24/0x2c)
[<c00292e0>] (warn_slowpath_null+0x0/0x2c) from [<c0070ab0>] (lockdep_trace_alloc+0xd8/0xf0)
[<c00709d8>] (lockdep_trace_alloc+0x0/0xf0) from [<c00c0850>] (kmem_cache_alloc+0x24/0x11c)
[<c00c082c>] (kmem_cache_alloc+0x0/0x11c) from [<c00bb044>] (__get_vm_area_node.clone.24+0x7c/0x16c)
[<c00bafc8>] (__get_vm_area_node.clone.24+0x0/0x16c) from [<c00bb7b8>] (get_vm_area_caller+0x48/0x54)
[<c00bb770>] (get_vm_area_caller+0x0/0x54) from [<c0020064>] (__alloc_remap_buffer.clone.15+0x38/0xb8)
[<c002002c>] (__alloc_remap_buffer.clone.15+0x0/0xb8) from [<c0020244>] (__dma_alloc+0x160/0x2c8)
[<c00200e4>] (__dma_alloc+0x0/0x2c8) from [<c00204d8>] (arm_dma_alloc+0x88/0xa0)
[<c0020450>] (arm_dma_alloc+0x0/0xa0) from [<c00beb00>] (dma_pool_alloc+0xcc/0x1a8)
[<c00bea34>] (dma_pool_alloc+0x0/0x1a8) from [<c01a9d14>] (pl08x_fill_llis_for_desc+0x28/0x568)
[<c01a9cec>] (pl08x_fill_llis_for_desc+0x0/0x568) from [<c01aab8c>] (pl08x_prep_slave_sg+0x258/0x3b0)
[<c01aa934>] (pl08x_prep_slave_sg+0x0/0x3b0) from [<c01c9f74>] (pl011_dma_tx_refill+0x140/0x288)
[<c01c9e34>] (pl011_dma_tx_refill+0x0/0x288) from [<c01ca748>] (pl011_start_tx+0xe4/0x120)
[<c01ca664>] (pl011_start_tx+0x0/0x120) from [<c01c54a4>] (__uart_start+0x48/0x4c)
[<c01c545c>] (__uart_start+0x0/0x4c) from [<c01c632c>] (uart_start+0x2c/0x3c)
[<c01c6300>] (uart_start+0x0/0x3c) from [<c01c795c>] (uart_write+0xcc/0xf4)
[<c01c7890>] (uart_write+0x0/0xf4) from [<c01b0384>] (n_tty_write+0x1c0/0x3e4)
[<c01b01c4>] (n_tty_write+0x0/0x3e4) from [<c01acfe8>] (tty_write+0x144/0x240)
[<c01acea4>] (tty_write+0x0/0x240) from [<c01ad17c>] (redirected_tty_write+0x98/0xac)
[<c01ad0e4>] (redirected_tty_write+0x0/0xac) from [<c00c371c>] (vfs_write+0xbc/0x150)
[<c00c3660>] (vfs_write+0x0/0x150) from [<c00c39c0>] (sys_write+0x4c/0x78)
[<c00c3974>] (sys_write+0x0/0x78) from [<c0014460>] (ret_fast_syscall+0x0/0x3c)

This happens because the DMA allocation code is not respecting atomic allocations correctly. GFP flags should not be tested for GFP_ATOMIC to determine if an atomic allocation is being requested; GFP_ATOMIC is not a flag but a value. The GFP bitmask flags are all prefixed with __GFP_. The rest of the kernel tests for __GFP_WAIT not being set to indicate an atomic allocation. We need to do the same.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
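A minimal sketch of the distinction, using the __GFP_WAIT flag named above (use_atomic_path() is a placeholder):

```c
/* Wrong: GFP_ATOMIC is a composite value, not a single flag bit,
 * so comparing the mask against it proves nothing. */
if (gfp == GFP_ATOMIC)
	use_atomic_path();

/* Right: an allocation is atomic when the caller may not sleep,
 * i.e. when __GFP_WAIT is absent from the mask. */
if (!(gfp & __GFP_WAIT))
	use_atomic_path();
```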
-
- 19 January 2013, 1 commit
-
-
By Russell King

Subhash Jadavani reported this partial backtrace:

Now consider this call stack from MMC block driver (this is on the ARMv7 based board):

[<c001b50c>] (v7_dma_inv_range+0x30/0x48) from [<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c)
[<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c) from [<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c)
[<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c) from [<c0017ff8>] (dma_map_sg+0x3c/0x114)

This is caused by incrementing the struct page pointer and running off the end of the sparsemem page array. Fix this by incrementing by pfn instead, and converting the pfn to a struct page.

Cc: <stable@vger.kernel.org>
Suggested-by: James Bottomley <JBottomley@Parallels.com>
Tested-by: Subhash Jadavani <subhashj@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
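A sketch of the loop shape before and after, assuming a per-page cache maintenance loop such as the one in dma_cache_maint_page():

```c
/* Before: with sparsemem, consecutive pfns need not be backed by
 * consecutive struct pages, so this can walk off a section's array. */
page++;

/* After: step the pfn and look up the struct page each iteration. */
pfn++;
page = pfn_to_page(pfn);
```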
-
- 29 November 2012, 1 commit
-
-
By Marek Szyprowski

This patch adds support for the DMA_ATTR_FORCE_CONTIGUOUS attribute to dma_alloc_attrs() in the IOMMU-aware implementation. Physically contiguous buffers are allocated with the Contiguous Memory Allocator.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 22 November 2012, 1 commit
-
-
By Gregory CLEMENT

Expose another DMA operations function: arm_dma_set_mask. This function will be added to custom DMA ops for Armada 370/XP. Depending on its configuration, Armada 370/XP can be set up as a "nearly" coherent architecture. In that case its DMA ops are made of:
- functions specific to this architecture
- already-exposed ARM DMA-related functions
- arm_dma_set_mask, which was not exposed yet

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 24 October 2012, 1 commit
-
-
By Jingoo Han

Fix the build warning in __dma_alloc(), as below:

arch/arm/mm/dma-mapping.c: In function '__dma_alloc':
arch/arm/mm/dma-mapping.c:653:29: warning: 'page' may be used uninitialized in this function [-Wuninitialized]

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
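The usual fix for this class of warning is an explicit initial value, so every path the compiler can see yields a defined pointer; a sketch:

```c
/* Sketch: initialize the pointer flagged by -Wuninitialized. */
struct page *page = NULL;
```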
-
- 02 October 2012, 5 commits
-
-
By Hiroshi Doyu

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Rob Herring

Remove arch_is_coherent() from the IOMMU DMA ops and implement separate coherent ops functions.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Rob Herring

arch_is_coherent is problematic as it is a global symbol. This doesn't work for multi-platform kernels or platforms which can support per-device coherent DMA. This adds arm_coherent_dma_ops to be used for devices which are connected coherently (i.e. to the ACP port on Cortex-A9 or A15). The arm_dma_ops are modified at boot when arch_is_coherent is true.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
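A sketch of how a per-device selection between the two ops tables might look; set_dma_ops() is the standard hook, while the helper and its boolean are illustrative:

```c
#include <linux/dma-mapping.h>

extern struct dma_map_ops arm_dma_ops;
extern struct dma_map_ops arm_coherent_dma_ops;

/* Hypothetical probe-time selection: devices wired to a coherent port
 * (e.g. the Cortex-A9 ACP) take the coherent ops and skip cache
 * maintenance; everything else keeps the default ops. */
static void my_setup_dma_ops(struct device *dev, bool coherent)
{
	set_dma_ops(dev, coherent ? &arm_coherent_dma_ops : &arm_dma_ops);
}
```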
-
By Hiroshi Doyu

With many IOMMU-capable devices, the console gets noisy. Tegra30 has a few dozen IOMMU-capable devices.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Hiroshi Doyu

Skip unnecessary operations if order == 0. This is also a little easier to read.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 24 September 2012, 1 commit
-
-
By Sachin Kamat

When either __alloc_from_contiguous() or __alloc_remap_buffer() fails to provide a valid pointer, the allocated memory is freed and an error is returned. However, 'pages' was not freed before returning the error.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 10 September 2012, 1 commit
-
-
By Thomas Petazzoni

The __free_from_pool() function was changed in e9da6e99. Unfortunately, the test that checks whether the provided (start, size) is within the DMA pool was improperly modified. It used to be:

if (start < coherent_head.vm_start || end > coherent_head.vm_end)

where coherent_head.vm_end was non-inclusive (i.e. it did not include the first byte after the pool). The test was changed to:

if (start < pool->vaddr || start > pool->vaddr + pool->size)

So now pool->vaddr + pool->size is inclusive (i.e. it includes the first byte after the pool), and the test should therefore be >= instead of >. This bug causes the following message when freeing the *first* DMA coherent buffer that has been allocated, because its virtual address is exactly equal to pool->vaddr + pool->size:

WARNING: at /home/thomas/projets/linux-2.6/arch/arm/mm/dma-mapping.c:463 __free_from_pool+0xa4/0xc0()
freeing wrong coherent size from pool

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Lior Amsalem <alior@marvell.com>
Cc: Maen Suleiman <maen@marvell.com>
Cc: Tawfik Bayouk <tawfik@marvell.com>
Cc: Shadi Ammouri <shadi@marvell.com>
Cc: Eran Ben-Avi <benavi@marvell.com>
Cc: Yehuda Yitschak <yehuday@marvell.com>
Cc: Nadav Haklai <nadavh@marvell.com>
[m.szyprowski: rebased onto v3.6-rc5 and resolved conflict]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
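The whole fix is the comparison operator; a sketch with the exclusive end spelled out:

```c
/* pool->vaddr + pool->size is one past the last valid pool byte, so a
 * start address equal to it is already outside the pool: use >=, not >. */
if (start < pool->vaddr || start >= pool->vaddr + pool->size)
	return 0;	/* not an atomic-pool allocation */
```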
-
- 29 August 2012, 6 commits
-
-
By Hiroshi Doyu

Make use of the same atomic pool that DMA does, and skip the kernel page mapping, which can involve sleepable operations when allocating a kernel page table.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Hiroshi Doyu

Support atomic allocation in __iommu_get_pages().

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
[moved __atomic_get_pages() under #ifdef CONFIG_ARM_DMA_USE_IOMMU to avoid an unused function warning for the no-IOMMU case]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Hiroshi Doyu

Check whether the given range ("start", "size") is included in "atomic_pool".

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
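A simplified sketch of such a helper, assuming the atomic_pool object with the vaddr/size fields referenced elsewhere in this log (the real check would also want to warn on a partially overlapping range):

```c
static bool __in_atomic_pool(void *start, size_t size)
{
	void *pool_start = atomic_pool.vaddr;
	void *pool_end = atomic_pool.vaddr + atomic_pool.size;
	void *end = start + size;

	/* True only when the whole [start, end) range sits in the pool. */
	return start >= pool_start && end <= pool_end;
}
```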
-
By Hiroshi Doyu

struct page **pages is necessary to align with the non-atomic path in __iommu_get_pages(). The atomic pool now keeps an initialized **pages array instead of just a single *page.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Marek Szyprowski

Print a loud warning when the system runs out of memory in the atomic DMA coherent pool, to let users notice the potential problem.

Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Marek Szyprowski

Some platforms might need to increase the atomic coherent pool to make sure that their devices will be able to allocate all their buffers from atomic context. This function can also be used to decrease the atomic coherent pool size if coherent allocations are not used on the given sub-platform.

Suggested-by: Josh Coombs <josh.coombs@gmail.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
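Such a setter has to run before the pool itself is allocated during boot; a hypothetical early platform hook (the setter name follows the description, the call site and size are illustrative):

```c
#include <linux/sizes.h>

/* Hypothetical board code: grow the atomic coherent pool to 1 MiB
 * before the pool gets allocated later in boot. */
static void __init my_board_reserve(void)
{
	init_dma_coherent_pool_size(SZ_1M);
}
```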
-
- 09 August 2012, 3 commits
-
-
By Aaro Koskinen

Commit e9da6e99 (ARM: dma-mapping: remove custom consistent dma region) changed the way atomic allocations are handled. However, arm_dma_free() was not modified accordingly, and as a result freeing of atomic allocations does not work correctly when CMA is disabled. Memory is leaked and the following WARNINGs are seen:

[   57.698911] ------------[ cut here ]------------
[   57.753518] WARNING: at arch/arm/mm/dma-mapping.c:263 arm_dma_free+0x88/0xe4()
[   57.811473] trying to free invalid coherent area: e0848000
[   57.867398] Modules linked in: sata_mv(-)
[   57.921373] [<c000d270>] (unwind_backtrace+0x0/0xf0) from [<c0015430>] (warn_slowpath_common+0x50/0x68)
[   58.033924] [<c0015430>] (warn_slowpath_common+0x50/0x68) from [<c00154dc>] (warn_slowpath_fmt+0x30/0x40)
[   58.152024] [<c00154dc>] (warn_slowpath_fmt+0x30/0x40) from [<c000dc18>] (arm_dma_free+0x88/0xe4)
[   58.219592] [<c000dc18>] (arm_dma_free+0x88/0xe4) from [<c008fa30>] (dma_pool_destroy+0x100/0x148)
[   58.345526] [<c008fa30>] (dma_pool_destroy+0x100/0x148) from [<c019a64c>] (release_nodes+0x144/0x218)
[   58.475782] [<c019a64c>] (release_nodes+0x144/0x218) from [<c0197e10>] (__device_release_driver+0x60/0xb8)
[   58.614260] [<c0197e10>] (__device_release_driver+0x60/0xb8) from [<c0198608>] (driver_detach+0xd8/0xec)
[   58.756527] [<c0198608>] (driver_detach+0xd8/0xec) from [<c0197c54>] (bus_remove_driver+0x7c/0xc4)
[   58.901648] [<c0197c54>] (bus_remove_driver+0x7c/0xc4) from [<c004bfac>] (sys_delete_module+0x19c/0x220)
[   59.051447] [<c004bfac>] (sys_delete_module+0x19c/0x220) from [<c0009140>] (ret_fast_syscall+0x0/0x2c)
[   59.207996] ---[ end trace 0745420412c0325a ]---
[   59.287110] ------------[ cut here ]------------
[   59.366324] WARNING: at arch/arm/mm/dma-mapping.c:263 arm_dma_free+0x88/0xe4()
[   59.450511] trying to free invalid coherent area: e0847000
[   59.534357] Modules linked in: sata_mv(-)
[   59.616785] [<c000d270>] (unwind_backtrace+0x0/0xf0) from [<c0015430>] (warn_slowpath_common+0x50/0x68)
[   59.790030] [<c0015430>] (warn_slowpath_common+0x50/0x68) from [<c00154dc>] (warn_slowpath_fmt+0x30/0x40)
[   59.972322] [<c00154dc>] (warn_slowpath_fmt+0x30/0x40) from [<c000dc18>] (arm_dma_free+0x88/0xe4)
[   60.070701] [<c000dc18>] (arm_dma_free+0x88/0xe4) from [<c008fa30>] (dma_pool_destroy+0x100/0x148)
[   60.256817] [<c008fa30>] (dma_pool_destroy+0x100/0x148) from [<c019a64c>] (release_nodes+0x144/0x218)
[   60.445201] [<c019a64c>] (release_nodes+0x144/0x218) from [<c0197e10>] (__device_release_driver+0x60/0xb8)
[   60.634148] [<c0197e10>] (__device_release_driver+0x60/0xb8) from [<c0198608>] (driver_detach+0xd8/0xec)
[   60.823623] [<c0198608>] (driver_detach+0xd8/0xec) from [<c0197c54>] (bus_remove_driver+0x7c/0xc4)
[   61.013268] [<c0197c54>] (bus_remove_driver+0x7c/0xc4) from [<c004bfac>] (sys_delete_module+0x19c/0x220)
[   61.203472] [<c004bfac>] (sys_delete_module+0x19c/0x220) from [<c0009140>] (ret_fast_syscall+0x0/0x2c)
[   61.393390] ---[ end trace 0745420412c0325b ]---

The patch fixes this.

Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Aaro Koskinen

The alignment mask is calculated incorrectly. Fixing the calculation makes strange hangs/lockups disappear during boot with the Amstrad E3 and a 3.6-rc1 kernel.

Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Chris Brand

Fix dma_contiguous_remap() so that it continues through all the regions, even after encountering one that is outside lowmem. Without this change, if you have two CMA regions, the first outside lowmem and the second inside lowmem, the second one never gets set up in the MMU. Data written to that region then doesn't get automatically flushed from the cache into memory.

Signed-off-by: Chris Brand <cbrand@broadcom.com>
[extended patch subject with 'fix' word]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
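A sketch of the loop fix, with identifiers modeled on the surrounding code of that era: an out-of-lowmem region becomes something to skip rather than a reason to stop:

```c
for (i = 0; i < dma_mmu_remap_num; i++) {
	phys_addr_t start = dma_mmu_remap[i].base;
	phys_addr_t end = start + dma_mmu_remap[i].size;

	/* Only the lowmem part of a region can be remapped here. */
	if (end > arm_lowmem_limit)
		end = arm_lowmem_limit;
	if (start >= end)
		continue;	/* was "return", which skipped every later region */

	/* ... remap [start, end) for DMA use ... */
}
```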
-
- 30 July 2012, 6 commits
-
-
By Marek Szyprowski

This patch adds support for the DMA_ATTR_SKIP_CPU_SYNC attribute in the dma_(un)map_(single,page,sg) family of functions. It lets dma-mapping clients create a mapping of a buffer for the given device without performing CPU cache synchronization. Cache synchronization can be skipped for buffers which are known to already be in the 'device' domain (their CPU caches have already been synchronized, or only coherent mappings of the buffer exist). For advanced users only; please use it with care.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
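A usage sketch with the dma_attrs API of that era (<linux/dma-attrs.h>); the wrapper function is hypothetical:

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map a scatterlist that is already in the 'device' domain, skipping
 * the CPU cache maintenance dma_map_sg() would otherwise perform. */
static int map_preflushed_sg(struct device *dev, struct scatterlist *sgl,
			     int nents)
{
	DEFINE_DMA_ATTRS(attrs);

	dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
	return dma_map_sg_attrs(dev, sgl, nents, DMA_TO_DEVICE, &attrs);
}
```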
-
By Marek Szyprowski

This patch adds support for the dma_get_sgtable() function, which is required to let drivers share buffers allocated by the DMA-mapping subsystem. The generic implementation based on virt_to_page() is not suitable for the ARM dma-mapping subsystem.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
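A hypothetical exporter-side use: describe a DMA buffer with an sg_table so another driver can consume it:

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Build an sg_table for a buffer previously obtained from
 * dma_alloc_coherent()/dma_alloc_attrs(); sgt is filled on success. */
static int export_dma_buffer(struct device *dev, struct sg_table *sgt,
			     void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	return dma_get_sgtable(dev, sgt, cpu_addr, dma_addr, size);
}
```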
-
By Marek Szyprowski

This patch adds support for the DMA_ATTR_NO_KERNEL_MAPPING attribute for IOMMU allocations, which lets drivers save precious kernel virtual address space when allocating large buffers that are intended to be accessed only from userspace. This patch is heavily based on initial work kindly provided by Abhinav Kochhar <abhinav.k@samsung.com>.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
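A usage sketch (the helper name is hypothetical); the key point is that the returned value is an opaque cookie rather than a usable kernel address:

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

/* Allocate a buffer meant to be touched only from userspace. */
static void *alloc_userspace_only_buffer(struct device *dev, size_t size,
					 dma_addr_t *dma_handle)
{
	DEFINE_DMA_ATTRS(attrs);

	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);

	/* No kernel mapping is created: never dereference the return
	 * value, only pass it back to dma_mmap_attrs()/dma_free_attrs()
	 * together with the same attribute set. */
	return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL, &attrs);
}
```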
-
By Marek Szyprowski

This patch fixes an incorrect check in the error path. When the allocation of the first page fails, a kernel oops occurs due to accessing element -1 of the pages array.

Reported-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
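Plausibly the shape of the fix, sketched as a cleanup loop over the already-allocated pages:

```c
/* Broken: if the first allocation fails with i == 0, the pre-decrement
 * wraps and the loop dereferences pages[-1]. */
while (--i)
	if (pages[i])
		__free_pages(pages[i], 0);

/* Fixed: the post-decrement test stops cleanly before index -1. */
while (i--)
	if (pages[i])
		__free_pages(pages[i], 0);
```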
-
By Marek Szyprowski

Add some sanity checks and forbid mmapping buffers into VMA areas larger than the allocated DMA buffer.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Marek Szyprowski

This patch changes the dma-mapping subsystem to use generic vmalloc areas for all consistent DMA allocations. This increases the total size limit of consistent allocations and removes platform hacks and a lot of duplicated code. Atomic allocations are served from a special pool preallocated at boot, because vmalloc areas cannot be reliably created in atomic context.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
-
- 16 July 2012, 1 commit
-
-
By Prathyush K

WARNING: at mm/vmalloc.c:1471 __iommu_free_buffer+0xcc/0xd0()
Trying to vfree() nonexistent vm area (ef095000)
Modules linked in:
[<c0015a18>] (unwind_backtrace+0x0/0xfc) from [<c0025a94>] (warn_slowpath_common+0x54/0x64)
[<c0025a94>] (warn_slowpath_common+0x54/0x64) from [<c0025b38>] (warn_slowpath_fmt+0x30/0x40)
[<c0025b38>] (warn_slowpath_fmt+0x30/0x40) from [<c0016de0>] (__iommu_free_buffer+0xcc/0xd0)
[<c0016de0>] (__iommu_free_buffer+0xcc/0xd0) from [<c0229a5c>] (exynos_drm_free_buf+0xe4/0x138)
[<c0229a5c>] (exynos_drm_free_buf+0xe4/0x138) from [<c022b358>] (exynos_drm_gem_destroy+0x80/0xfc)
[<c022b358>] (exynos_drm_gem_destroy+0x80/0xfc) from [<c0211230>] (drm_gem_object_free+0x28/0x34)
[<c0211230>] (drm_gem_object_free+0x28/0x34) from [<c0211bd0>] (drm_gem_object_release_handle+0xcc/0xd8)
[<c0211bd0>] (drm_gem_object_release_handle+0xcc/0xd8) from [<c01abe10>] (idr_for_each+0x74/0xb8)
[<c01abe10>] (idr_for_each+0x74/0xb8) from [<c02114e4>] (drm_gem_release+0x1c/0x30)
[<c02114e4>] (drm_gem_release+0x1c/0x30) from [<c0210ae8>] (drm_release+0x608/0x694)
[<c0210ae8>] (drm_release+0x608/0x694) from [<c00b75a0>] (fput+0xb8/0x228)
[<c00b75a0>] (fput+0xb8/0x228) from [<c00b40c4>] (filp_close+0x64/0x84)
[<c00b40c4>] (filp_close+0x64/0x84) from [<c0029d54>] (put_files_struct+0xe8/0x104)
[<c0029d54>] (put_files_struct+0xe8/0x104) from [<c002b930>] (do_exit+0x608/0x774)
[<c002b930>] (do_exit+0x608/0x774) from [<c002bae4>] (do_group_exit+0x48/0xb4)
[<c002bae4>] (do_group_exit+0x48/0xb4) from [<c002bb60>] (sys_exit_group+0x10/0x18)
[<c002bb60>] (sys_exit_group+0x10/0x18) from [<c000ee80>] (ret_fast_syscall+0x0/0x30)

This patch modifies the condition used while freeing to match the condition used during allocation. This fixes the above warning, which arises when the array size is equal to PAGE_SIZE: allocation is done using kzalloc but freeing was done using vfree.

Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
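A sketch of the invariant the patch restores: the free path must branch on exactly the same size test as the allocation path:

```c
/* Allocation: small arrays come from the slab, large ones from vmalloc. */
if (array_size <= PAGE_SIZE)
	pages = kzalloc(array_size, GFP_KERNEL);
else
	pages = vzalloc(array_size);

/* ... */

/* Freeing must mirror that exact test; otherwise a kzalloc'ed array
 * whose size equals PAGE_SIZE ends up in vfree() and triggers the
 * "Trying to vfree() nonexistent vm area" warning. */
if (array_size <= PAGE_SIZE)
	kfree(pages);
else
	vfree(pages);
```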
-