1. 23 Dec 2019, 2 commits
    • iommu/iova: Silence warnings under memory pressure · 944c9175
      Authored by Qian Cai
      When running heavy memory-pressure workloads, this 5+ year-old system
      throws the endless warnings below because disk IO is too slow to
      recover from swapping. Since the volume of failures from
      alloc_iova_fast() can be large, once it calls printk() it triggers
      disk IO (writing to the log files) and pending softirqs, which can
      loop indefinitely and make no progress for days under the ongoing
      memory reclaim. This is the Intel counterpart of the AMD change that
      has already been merged; see commit 3d708895 ("iommu/amd: Silence
      warnings under memory pressure"). Since the allocation failure is
      already reported in intel_alloc_iova(), just call dev_err_once()
      there, because even "ratelimited" is too much, and silence the one in
      alloc_iova_mem() to avoid the expensive warn_alloc().
      
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       slab_out_of_memory: 66 callbacks suppressed
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: iommu_iova, object size: 40, buffer size: 448, default order:
      0, min order: 0
         node 0: slabs: 1822, objs: 16398, free: 0
         node 1: slabs: 2051, objs: 18459, free: 31
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: iommu_iova, object size: 40, buffer size: 448, default order:
      0, min order: 0
         node 0: slabs: 1822, objs: 16398, free: 0
         node 1: slabs: 2051, objs: 18459, free: 31
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: iommu_iova, object size: 40, buffer size: 448, default order:
      0, min order: 0
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default
      order: 0, min order: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default
      order: 0, min order: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default
      order: 0, min order: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default
      order: 0, min order: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 1: slabs: 381, objs: 2286, free: 27
         node 1: slabs: 381, objs: 2286, free: 27
         node 1: slabs: 381, objs: 2286, free: 27
         node 1: slabs: 381, objs: 2286, free: 27
         node 0: slabs: 1822, objs: 16398, free: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default
      order: 0, min order: 0
         node 1: slabs: 2051, objs: 18459, free: 31
         node 0: slabs: 697, objs: 4182, free: 0
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         node 1: slabs: 381, objs: 2286, free: 27
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default
      order: 0, min order: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 1: slabs: 381, objs: 2286, free: 27
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       warn_alloc: 96 callbacks suppressed
       kworker/11:1H: page allocation failure: order:0,
      mode:0xa20(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0-1
       CPU: 11 PID: 1642 Comm: kworker/11:1H Tainted: G    B
       Hardware name: HP ProLiant XL420 Gen9/ProLiant XL420 Gen9, BIOS U19
      12/27/2015
       Workqueue: kblockd blk_mq_run_work_fn
       Call Trace:
        dump_stack+0xa0/0xea
        warn_alloc.cold.94+0x8a/0x12d
        __alloc_pages_slowpath+0x1750/0x1870
        __alloc_pages_nodemask+0x58a/0x710
        alloc_pages_current+0x9c/0x110
        alloc_slab_page+0xc9/0x760
        allocate_slab+0x48f/0x5d0
        new_slab+0x46/0x70
        ___slab_alloc+0x4ab/0x7b0
        __slab_alloc+0x43/0x70
        kmem_cache_alloc+0x2dd/0x450
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
        alloc_iova+0x33/0x210
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default
      order: 0, min order: 0
         node 0: slabs: 697, objs: 4182, free: 0
        alloc_iova_fast+0x62/0x3d1
         node 1: slabs: 381, objs: 2286, free: 27
        intel_alloc_iova+0xce/0xe0
        intel_map_sg+0xed/0x410
        scsi_dma_map+0xd7/0x160
        scsi_queue_rq+0xbf7/0x1310
        blk_mq_dispatch_rq_list+0x4d9/0xbc0
        blk_mq_sched_dispatch_requests+0x24a/0x300
        __blk_mq_run_hw_queue+0x156/0x230
        blk_mq_run_work_fn+0x3b/0x40
        process_one_work+0x579/0xb90
        worker_thread+0x63/0x5b0
        kthread+0x1e6/0x210
        ret_from_fork+0x3a/0x50
       Mem-Info:
       active_anon:2422723 inactive_anon:361971 isolated_anon:34403
        active_file:2285 inactive_file:1838 isolated_file:0
        unevictable:0 dirty:1 writeback:5 unstable:0
        slab_reclaimable:13972 slab_unreclaimable:453879
        mapped:2380 shmem:154 pagetables:6948 bounce:0
        free:19133 free_pcp:7363 free_cma:0
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
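The fix above trades a flood of printk() calls for a single message. A minimal userspace C sketch of that print-once idea, under the assumption that one static flag per call site is enough (the function name is illustrative, not a kernel API):

```c
#include <stdbool.h>
#include <stdio.h>

/* Userspace sketch of the print-once behaviour behind dev_err_once():
 * only the first failure emits a message, later calls stay silent, so a
 * storm of allocation failures cannot flood the log and trigger more
 * disk IO. Returns whether a message was printed. Illustrative only. */
bool report_iova_failure_once(const char *dev_name)
{
    static bool printed = false;

    if (printed)
        return false;               /* already warned once; stay silent */
    printed = true;
    fprintf(stderr, "%s: DMAR: Allocating 1-page iova failed\n", dev_name);
    return true;
}
```

Repeated calls print nothing after the first, mirroring how dev_err_once() keeps a slow-disk system from looping on log writes during reclaim.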
    • iommu: Fix Kconfig indentation · d0432345
      Authored by Krzysztof Kozlowski
      Adjust indentation from spaces to a tab (plus an optional two
      spaces), per the coding style, with a command like:
      	$ sed -e 's/^        /\t/' -i */Kconfig
      Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  2. 19 Dec 2019, 1 commit
  3. 18 Dec 2019, 1 commit
  4. 17 Dec 2019, 7 commits
  5. 22 Nov 2019, 2 commits
    • drivers: iommu: hyperv: Make HYPERV_IOMMU only available on x86 · d7f0b2e4
      Authored by Boqun Feng
      Currently hyperv-iommu is implemented in an x86-specific way; for
      example, it uses the APIC. So make the HYPERV_IOMMU Kconfig option
      depend on X86 in preparation for enabling Hyper-V on architectures
      other than x86.
      
      Cc: Lan Tianyu <Tianyu.Lan@microsoft.com>
      Cc: Michael Kelley <mikelley@microsoft.com>
      Cc: linux-hyperv@vger.kernel.org
      Signed-off-by: Boqun Feng (Microsoft) <boqun.feng@gmail.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • dma-mapping: treat dev->bus_dma_mask as a DMA limit · a7ba70f1
      Authored by Nicolas Saenz Julienne
      Using a mask to represent bus DMA constraints has a set of
      limitations, the biggest being that it can only hold a power of two
      (minus one). The DMA mapping code is already aware of this and
      treats dev->bus_dma_mask as a limit. This quirk is already used by
      some architectures, although it is still rare.
      
      With the introduction of the Raspberry Pi 4 we have found a new
      contender for the use of bus DMA limits, as its PCIe bus can only
      address the lower 3 GB of memory (of a total of 4 GB). This is
      impossible to represent with a mask. To make things worse, the
      device-tree code rounds non-power-of-two bus DMA limits up to the
      next power of two, which is unacceptable in this case.
      
      In light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all
      over the tree and treat it as such. Note that dev->bus_dma_limit
      should contain the highest accessible DMA address.
      Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
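The mask-versus-limit distinction above can be checked with one bit trick: a DMA mask is only well-formed when it is 2^n - 1 (a contiguous run of ones from bit 0), which the Raspberry Pi 4's 3 GB boundary is not. A hedged C sketch of that check (the helper name is illustrative, not a kernel API):

```c
#include <stdbool.h>
#include <stdint.h>

/* A DMA mask only makes sense when it is 2^n - 1: a contiguous run of
 * ones from bit 0. For such a value, mask + 1 is exactly 2^n, which
 * shares no set bits with the mask itself. The helper name is
 * illustrative, not a kernel API. */
bool is_representable_as_dma_mask(uint64_t addr_limit)
{
    return (addr_limit & (addr_limit + 1)) == 0;
}
```

A full 32-bit mask (0xFFFFFFFF) passes, but the Pi 4's 3 GB boundary (0xBFFFFFFF) does not, which is why the value has to be carried as dev->bus_dma_limit rather than rounded up to the next power of two.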
  6. 21 Nov 2019, 2 commits
  7. 13 Nov 2019, 1 commit
  8. 11 Nov 2019, 17 commits
  9. 07 Nov 2019, 1 commit
    • iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc · dd5ddd3c
      Authored by Will Deacon
      The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all
      users of the IOMMU API. Despite its name, the idea behind it isn't
      especially tied to Qualcomm implementations and could conceivably be
      used by other systems.
      
      Rename it to 'IOMMU_SYS_CACHE_ONLY' and update the comment to
      describe the idea behind it a bit better.
      
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org>
      Signed-off-by: Will Deacon <will@kernel.org>
  10. 05 Nov 2019, 6 commits