1. 24 Sep 2012: 1 commit
2. 10 Sep 2012: 1 commit
• arm: mm: fix DMA pool affiliation check · f3d87524
Authored by Thomas Petazzoni
      The __free_from_pool() function was changed in
      e9da6e99. Unfortunately, the test that
      checks whether the provided (start,size) is within the DMA pool has
      been improperly modified. It used to be:
      
        if (start < coherent_head.vm_start || end > coherent_head.vm_end)
      
Where coherent_head.vm_end was non-inclusive (i.e., it did not include
the first byte after the pool). The test has been changed to:
      
        if (start < pool->vaddr || start > pool->vaddr + pool->size)
      
So now pool->vaddr + pool->size is inclusive (i.e., it includes the
first byte after the pool), so the test should be >= instead of >.
      
This bug causes the following message when freeing the *first* DMA
coherent buffer that has been allocated, because its virtual address
is exactly equal to pool->vaddr + pool->size:
      
      WARNING: at /home/thomas/projets/linux-2.6/arch/arm/mm/dma-mapping.c:463 __free_from_pool+0xa4/0xc0()
      freeing wrong coherent size from pool
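
For illustration, a minimal stand-alone sketch of the corrected membership check; the struct and helper names below are hypothetical, only the boundary convention mirrors the description above.

  #include <stdbool.h>
  #include <stddef.h>

  struct pool_sketch {            /* hypothetical stand-in for the atomic DMA pool */
          char  *vaddr;           /* first byte of the pool */
          size_t size;            /* pool size in bytes */
  };

  /* pool->vaddr + pool->size is the first byte *after* the pool, so an
   * address equal to it is already outside: reject it with ">=", not ">".
   */
  static bool addr_in_pool(const struct pool_sketch *pool, const char *start)
  {
          return !(start < pool->vaddr || start >= pool->vaddr + pool->size);
  }
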
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Lior Amsalem <alior@marvell.com>
      Cc: Maen Suleiman <maen@marvell.com>
      Cc: Tawfik Bayouk <tawfik@marvell.com>
      Cc: Shadi Ammouri <shadi@marvell.com>
      Cc: Eran Ben-Avi <benavi@marvell.com>
      Cc: Yehuda Yitschak <yehuday@marvell.com>
      Cc: Nadav Haklai <nadavh@marvell.com>
      [m.szyprowski: rebased onto v3.6-rc5 and resolved conflict]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
3. 29 Aug 2012: 6 commits
4. 25 Aug 2012: 3 commits
• ARM: 7499/1: mm: Fix vmalloc overlap check for !HIGHMEM · 36418c51
Authored by Jonathan Austin
      With !HIGHMEM, sanity_check_meminfo checks for banks that completely or
      partially overlap the vmalloc region. The test for partial overlap checks
      __va(bank->start + bank->size) > vmalloc_min. This is not appropriate if
      there is a non-linear translation between virtual and physical addresses,
      as bank->start + bank->size is actually in the bank following the one being
      interrogated.
      
In most cases, even when using SPARSEMEM, this is not problematic, as the
subsequent bank will start at a higher VA than the one in question. However,
if the physical-to-virtual address conversion is not monotonically increasing,
the incorrect test could result in a bank not being truncated when it
should be.
      
      This patch ensures we perform the va-pa conversion on memory from the
      bank we are interested in, not the following one.
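
A hedged sketch of the idea (not the exact kernel diff; the bank fields and vmalloc_min follow the wording above): convert an address that is guaranteed to lie inside the bank, for example its last byte, instead of the first byte of the following bank.

  /* Sketch only. __va() is the kernel's phys-to-virt conversion and
   * vmalloc_min the lowest vmalloc address, as referenced above.
   * bank->start + bank->size is the first byte of the *next* bank, so its
   * virtual address proves nothing about this bank when the phys-to-virt
   * mapping is not monotonic; the bank's own last byte does.
   */
  if (__va(bank->start + bank->size - 1) >= vmalloc_min) {
          /* this bank overlaps the vmalloc region and must be truncated */
  }
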
Reported-by: ??? (Steve) <zhanzhenbo@gmail.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7502/1: contextidr: avoid using bfi instruction during notifier · ae3790b8
Authored by Will Deacon
      The bfi instruction is not available on ARMv6, so instead use an and/orr
      sequence in the contextidr_notifier. This gets rid of the assembler
      error:
      
        Assembler messages:
        Error: selected processor does not support ARM mode `bfi r3,r2,#0,#8'
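
The operation being replaced is just a bitfield insert of the low 8 bits; expressed in C (with dst and src standing in for r3 and r2), the semantics the and/orr pair has to reproduce are:

  /* "bfi r3, r2, #0, #8" inserts bits [7:0] of r2 into bits [7:0] of r3.
   * On ARMv6, which has no bfi, the same result comes from clearing the
   * field with AND and merging the new bits with ORR:
   */
  dst = (dst & ~0xffu) | (src & 0xffu);
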
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: Fix ioremap() of address zero · a849088a
Authored by Russell King
      Murali Nalajala reports a regression that ioremapping address zero
      results in an oops dump:
      
      Unable to handle kernel paging request at virtual address fa200000
      pgd = d4f80000
      [fa200000] *pgd=00000000
      Internal error: Oops: 5 [#1] PREEMPT SMP ARM
      Modules linked in:
      CPU: 0    Tainted: G        W (3.4.0-g3b5f728-00009-g638207a #13)
      PC is at msm_pm_config_rst_vector_before_pc+0x8/0x30
      LR is at msm_pm_boot_config_before_pc+0x18/0x20
      pc : [<c0078f84>]    lr : [<c007903c>]    psr: a0000093
      sp : c0837ef0  ip : cfe00000  fp : 0000000d
      r10: da7efc17  r9 : 225c4278  r8 : 00000006
      r7 : 0003c000  r6 : c085c824  r5 : 00000001  r4 : fa101000
      r3 : fa200000  r2 : c095080c  r1 : 002250fc  r0 : 00000000
      Flags: NzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM Segment kernel
      Control: 10c5387d  Table: 25180059  DAC: 00000015
      [<c0078f84>] (msm_pm_config_rst_vector_before_pc+0x8/0x30) from [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20)
      [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20) from [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04)
      [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04) from [<c007b17c>] (arch_idle+0x294/0x3e0)
      [<c007b17c>] (arch_idle+0x294/0x3e0) from [<c000eed8>] (default_idle+0x18/0x2c)
      [<c000eed8>] (default_idle+0x18/0x2c) from [<c000f254>] (cpu_idle+0x90/0xe4)
      [<c000f254>] (cpu_idle+0x90/0xe4) from [<c057231c>] (rest_init+0x88/0xa0)
      [<c057231c>] (rest_init+0x88/0xa0) from [<c07ff890>] (start_kernel+0x3a8/0x40c)
      Code: c0704256 e12fff1e e59f2020 e5923000 (e5930000)
      
This is caused by the 'reserved' entries which we insert (see
19b52abe - ARM: 7438/1: fill possible PMD empty section gaps),
which get matched for physical address zero.
      
      Resolve this by marking these reserved entries with a different flag.
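
A hedged sketch of that approach follows; the flag name, list head and lookup helper are illustrative placeholders, not necessarily the identifiers used by the actual patch.

  /* Hypothetical flag marking the gap-filler entries inserted for empty
   * PMD sections; the real kernel flag name may differ.
   */
  #define VM_ARM_GAP_FILLER  0x40000000

  /* When ioremap() looks for an existing static mapping covering a physical
   * range, gap-filler entries must be skipped -- otherwise a request for
   * physical address 0 can spuriously match one of them.
   */
  static struct vm_struct *find_static_mapping(phys_addr_t paddr, size_t size)
  {
          struct vm_struct *vm;

          for (vm = static_vm_list; vm; vm = vm->next) {   /* hypothetical list head */
                  if (vm->flags & VM_ARM_GAP_FILLER)
                          continue;                        /* never match a filler */
                  if (paddr >= vm->phys_addr &&
                      paddr + size <= vm->phys_addr + vm->size)
                          return vm;
          }
          return NULL;
  }
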
      
      Cc: <stable@vger.kernel.org>
Tested-by: Murali Nalajala <mnalajal@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
5. 11 Aug 2012: 2 commits
6. 09 Aug 2012: 3 commits
• ARM: dma-mapping: fix incorrect freeing of atomic allocations · d9e0d149
Authored by Aaro Koskinen
      Commit e9da6e99 (ARM: dma-mapping:
      remove custom consistent dma region) changed the way atomic allocations
      are handled. However, arm_dma_free() was not modified accordingly, and
      as a result freeing of atomic allocations does not work correctly when
CMA is disabled. Memory is leaked and the following WARNINGs are seen:
      
      [   57.698911] ------------[ cut here ]------------
      [   57.753518] WARNING: at arch/arm/mm/dma-mapping.c:263 arm_dma_free+0x88/0xe4()
      [   57.811473] trying to free invalid coherent area: e0848000
      [   57.867398] Modules linked in: sata_mv(-)
      [   57.921373] [<c000d270>] (unwind_backtrace+0x0/0xf0) from [<c0015430>] (warn_slowpath_common+0x50/0x68)
      [   58.033924] [<c0015430>] (warn_slowpath_common+0x50/0x68) from [<c00154dc>] (warn_slowpath_fmt+0x30/0x40)
      [   58.152024] [<c00154dc>] (warn_slowpath_fmt+0x30/0x40) from [<c000dc18>] (arm_dma_free+0x88/0xe4)
      [   58.219592] [<c000dc18>] (arm_dma_free+0x88/0xe4) from [<c008fa30>] (dma_pool_destroy+0x100/0x148)
      [   58.345526] [<c008fa30>] (dma_pool_destroy+0x100/0x148) from [<c019a64c>] (release_nodes+0x144/0x218)
      [   58.475782] [<c019a64c>] (release_nodes+0x144/0x218) from [<c0197e10>] (__device_release_driver+0x60/0xb8)
      [   58.614260] [<c0197e10>] (__device_release_driver+0x60/0xb8) from [<c0198608>] (driver_detach+0xd8/0xec)
      [   58.756527] [<c0198608>] (driver_detach+0xd8/0xec) from [<c0197c54>] (bus_remove_driver+0x7c/0xc4)
      [   58.901648] [<c0197c54>] (bus_remove_driver+0x7c/0xc4) from [<c004bfac>] (sys_delete_module+0x19c/0x220)
      [   59.051447] [<c004bfac>] (sys_delete_module+0x19c/0x220) from [<c0009140>] (ret_fast_syscall+0x0/0x2c)
      [   59.207996] ---[ end trace 0745420412c0325a ]---
      [   59.287110] ------------[ cut here ]------------
      [   59.366324] WARNING: at arch/arm/mm/dma-mapping.c:263 arm_dma_free+0x88/0xe4()
      [   59.450511] trying to free invalid coherent area: e0847000
      [   59.534357] Modules linked in: sata_mv(-)
      [   59.616785] [<c000d270>] (unwind_backtrace+0x0/0xf0) from [<c0015430>] (warn_slowpath_common+0x50/0x68)
      [   59.790030] [<c0015430>] (warn_slowpath_common+0x50/0x68) from [<c00154dc>] (warn_slowpath_fmt+0x30/0x40)
      [   59.972322] [<c00154dc>] (warn_slowpath_fmt+0x30/0x40) from [<c000dc18>] (arm_dma_free+0x88/0xe4)
      [   60.070701] [<c000dc18>] (arm_dma_free+0x88/0xe4) from [<c008fa30>] (dma_pool_destroy+0x100/0x148)
      [   60.256817] [<c008fa30>] (dma_pool_destroy+0x100/0x148) from [<c019a64c>] (release_nodes+0x144/0x218)
      [   60.445201] [<c019a64c>] (release_nodes+0x144/0x218) from [<c0197e10>] (__device_release_driver+0x60/0xb8)
      [   60.634148] [<c0197e10>] (__device_release_driver+0x60/0xb8) from [<c0198608>] (driver_detach+0xd8/0xec)
      [   60.823623] [<c0198608>] (driver_detach+0xd8/0xec) from [<c0197c54>] (bus_remove_driver+0x7c/0xc4)
      [   61.013268] [<c0197c54>] (bus_remove_driver+0x7c/0xc4) from [<c004bfac>] (sys_delete_module+0x19c/0x220)
      [   61.203472] [<c004bfac>] (sys_delete_module+0x19c/0x220) from [<c0009140>] (ret_fast_syscall+0x0/0x2c)
      [   61.393390] ---[ end trace 0745420412c0325b ]---
      
      The patch fixes this.
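
A hedged sketch of the free-path ordering this implies (the helper names follow the related commit messages; the surrounding structure is illustrative, not the exact diff): hand the buffer back to the atomic pool first, and only fall through to the remap/buffer teardown when it did not come from the pool.

  /* Illustrative structure only. */
  static void arm_dma_free_sketch(struct page *page, void *cpu_addr, size_t size)
  {
          if (__free_from_pool(cpu_addr, size))
                  return;                 /* atomic allocation: the pool handled it */

          /* non-atomic (non-CMA) path: undo the remap and release the pages */
          __dma_free_remap(cpu_addr, size);
          __dma_free_buffer(page, size);
  }
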
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
• ARM: dma-mapping: fix atomic allocation alignment · e4ea6918
Authored by Aaro Koskinen
The alignment mask is calculated incorrectly. Fixing the calculation
makes strange hangs/lockups disappear during boot on the Amstrad E3
with a 3.6-rc1 kernel.
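
The commit message does not quote the code, but for a page-bitmap allocator the mask is normally derived from the allocation order; a purely illustrative sketch of that calculation:

  /* An allocation of 'size' bytes occupies 2^order pages and must start on a
   * page number that is a multiple of 2^order, so the alignment mask handed
   * to a bitmap_find_next_zero_area()-style search is (2^order - 1).
   */
  unsigned int  order      = get_order(size);
  unsigned long align_mask = (1UL << order) - 1;
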
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
• ARM: mm: fix MMU mapping of CMA regions · 39f78e70
Authored by Chris Brand
      Fix dma_contiguous_remap() so that it continues through all the
      regions, even after encountering one that is outside lowmem.
      Without this change, if you have two CMA regions, the first outside
lowmem and the second inside lowmem, only the second one will get
      set up in the MMU. Data written to that region then doesn't get
      automatically flushed from the cache into memory.
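
A hedged sketch of the control flow this describes (the region array, lowmem limit and remap helper are illustrative placeholders): a region lying entirely above lowmem should be skipped with continue rather than ending the whole loop.

  for (i = 0; i < nr_regions; i++) {
          phys_addr_t start = region[i].base;
          phys_addr_t end   = start + region[i].size;

          if (end > lowmem_limit)
                  end = lowmem_limit;     /* only the lowmem part needs remapping */
          if (start >= end)
                  continue;               /* region is above lowmem: skip it, but keep
                                           * going -- an early return here would also
                                           * drop every region that follows */

          remap_region(start, end);       /* hypothetical mapping setup */
  }
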
Signed-off-by: Chris Brand <cbrand@broadcom.com>
[extended patch subject with 'fix' word]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
7. 31 Jul 2012: 1 commit
8. 30 Jul 2012: 6 commits
9. 16 Jul 2012: 1 commit
• ARM: dma-mapping: modify condition check while freeing pages · 46c87852
Authored by Prathyush K
      WARNING: at mm/vmalloc.c:1471 __iommu_free_buffer+0xcc/0xd0()
      Trying to vfree() nonexistent vm area (ef095000)
      Modules linked in:
      [<c0015a18>] (unwind_backtrace+0x0/0xfc) from [<c0025a94>] (warn_slowpath_common+0x54/0x64)
      [<c0025a94>] (warn_slowpath_common+0x54/0x64) from [<c0025b38>] (warn_slowpath_fmt+0x30/0x40)
      [<c0025b38>] (warn_slowpath_fmt+0x30/0x40) from [<c0016de0>] (__iommu_free_buffer+0xcc/0xd0)
      [<c0016de0>] (__iommu_free_buffer+0xcc/0xd0) from [<c0229a5c>] (exynos_drm_free_buf+0xe4/0x138)
      [<c0229a5c>] (exynos_drm_free_buf+0xe4/0x138) from [<c022b358>] (exynos_drm_gem_destroy+0x80/0xfc)
      [<c022b358>] (exynos_drm_gem_destroy+0x80/0xfc) from [<c0211230>] (drm_gem_object_free+0x28/0x34)
      [<c0211230>] (drm_gem_object_free+0x28/0x34) from [<c0211bd0>] (drm_gem_object_release_handle+0xcc/0xd8)
      [<c0211bd0>] (drm_gem_object_release_handle+0xcc/0xd8) from [<c01abe10>] (idr_for_each+0x74/0xb8)
      [<c01abe10>] (idr_for_each+0x74/0xb8) from [<c02114e4>] (drm_gem_release+0x1c/0x30)
      [<c02114e4>] (drm_gem_release+0x1c/0x30) from [<c0210ae8>] (drm_release+0x608/0x694)
      [<c0210ae8>] (drm_release+0x608/0x694) from [<c00b75a0>] (fput+0xb8/0x228)
      [<c00b75a0>] (fput+0xb8/0x228) from [<c00b40c4>] (filp_close+0x64/0x84)
      [<c00b40c4>] (filp_close+0x64/0x84) from [<c0029d54>] (put_files_struct+0xe8/0x104)
      [<c0029d54>] (put_files_struct+0xe8/0x104) from [<c002b930>] (do_exit+0x608/0x774)
      [<c002b930>] (do_exit+0x608/0x774) from [<c002bae4>] (do_group_exit+0x48/0xb4)
      [<c002bae4>] (do_group_exit+0x48/0xb4) from [<c002bb60>] (sys_exit_group+0x10/0x18)
      [<c002bb60>] (sys_exit_group+0x10/0x18) from [<c000ee80>] (ret_fast_syscall+0x0/0x30)
      
This patch modifies the condition used while freeing to match the
condition used while allocating. This fixes the above warning, which
arises when the array size is equal to PAGE_SIZE: allocation is done
using kzalloc, but freeing is done using vfree.
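
A hedged sketch of the matched conditions (structure illustrative; the kzalloc/vzalloc/kfree/vfree pairing follows the description above): whichever threshold the allocation path uses, the free path must test exactly the same one.

  size_t array_size = count * sizeof(struct page *);
  struct page **pages;

  /* allocation: small arrays from the slab, large ones from vmalloc space */
  if (array_size <= PAGE_SIZE)
          pages = kzalloc(array_size, gfp);
  else
          pages = vzalloc(array_size);

  /* later, when freeing: the condition must match the one above, so an
   * array of exactly PAGE_SIZE goes back through kfree(), not vfree() */
  if (array_size <= PAGE_SIZE)
          kfree(pages);
  else
          vfree(pages);
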
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
10. 10 Jul 2012: 2 commits
11. 05 Jul 2012: 1 commit
12. 01 Jul 2012: 1 commit
13. 29 Jun 2012: 1 commit
14. 25 Jun 2012: 1 commit
• ARM: dma-mapping: fix buffer chunk allocation order · 593f4735
Authored by Marek Szyprowski
The IOMMU-aware dma_alloc_attrs() implementation allocates buffers in
power-of-two chunks to improve performance and take advantage of the large
page mappings provided by some IOMMU hardware. However, due to a subtle bug,
the current code allocated those chunks in smallest-to-largest order, which
completely killed the advantage of using chunks larger than a page. If a
4KiB chunk was mapped as the first chunk, the subsequent chunks were not
aligned to the power-of-two matching their size, so IOMMU drivers could not
use internal mappings of any size other than 4KiB (the largest common
denominator of alignment and chunk size).

This patch fixes the issue by switching to the correct largest-to-smallest
chunk size allocation sequence.
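
A hedged sketch of such a largest-to-smallest loop (variable and helper names illustrative, not the exact kernel code): take the biggest power-of-two chunk that still fits, falling back to smaller orders on failure, so every chunk stays naturally aligned to its own size.

  /* 'count' is the number of pages still needed; 'i' indexes the chunk list. */
  while (count) {
          unsigned int order = __fls(count);      /* largest 2^order <= count */
          struct page *p = alloc_pages(gfp, order);

          while (!p && order)                     /* fall back to a smaller chunk */
                  p = alloc_pages(gfp, --order);
          if (!p)
                  return -ENOMEM;

          chunks[i++] = p;                        /* chunk is aligned to its own size */
          count      -= 1UL << order;
  }
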
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
15. 11 Jun 2012: 2 commits
16. 04 Jun 2012: 1 commit
17. 21 May 2012: 7 commits