1. 01 May, 2020 1 commit
  2. 19 Mar, 2020 1 commit
    • iommu/vt-d: Silence RCU-list debugging warning in dmar_find_atsr() · c6f4ebde
      Authored by Qian Cai
      dmar_find_atsr() calls list_for_each_entry_rcu() outside of an RCU read
      side critical section, but with dmar_global_lock held. Silence this
      false positive (see the sketch after this entry).
      
       drivers/iommu/intel-iommu.c:4504 RCU-list traversed in non-reader section!!
       1 lock held by swapper/0/1:
       #0: ffffffff9755bee8 (dmar_global_lock){+.+.}, at: intel_iommu_init+0x1a6/0xe19
      
       Call Trace:
        dump_stack+0xa4/0xfe
        lockdep_rcu_suspicious+0xeb/0xf5
        dmar_find_atsr+0x1ab/0x1c0
        dmar_parse_one_atsr+0x64/0x220
        dmar_walk_remapping_entries+0x130/0x380
        dmar_table_init+0x166/0x243
        intel_iommu_init+0x1ab/0xe19
        pci_iommu_init+0x1a/0x44
        do_one_initcall+0xae/0x4d0
        kernel_init_freeable+0x412/0x4c5
        kernel_init+0x19/0x193
      Signed-off-by: Qian Cai <cai@lca.pw>
      Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      c6f4ebde
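      A minimal sketch of the silencing pattern, assuming the driver's
      dmar_rcu_check() lockdep helper and a condensed function body: since
      v5.4, list_for_each_entry_rcu() takes an optional lockdep condition,
      so the walk can assert that dmar_global_lock is held instead of
      requiring rcu_read_lock().

          /* Sketch: the optional fourth argument is a lockdep expression;
           * dmar_rcu_check() asserts that either an RCU reader or
           * dmar_global_lock protects the list traversal. */
          static struct dmar_atsr_unit *dmar_find_atsr(struct acpi_dmar_atsr *atsr)
          {
                  struct dmar_atsr_unit *atsru;

                  list_for_each_entry_rcu(atsru, &dmar_atsr_units, list,
                                          dmar_rcu_check()) {
                          if (atsru->segment == atsr->segment) /* simplified match */
                                  return atsru;
                  }

                  return NULL;
          }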
  3. 15 Mar, 2020 1 commit
  4. 13 Mar, 2020 2 commits
  5. 10 Mar, 2020 1 commit
    • iommu/vt-d: Fix RCU-list bugs in intel_iommu_init() · 2d48ea0e
      Authored by Qian Cai
      Several places in intel_iommu_init() traverse RCU lists without holding
      any lock. Fix them by acquiring dmar_global_lock (see the sketch after
      this entry).
      
       WARNING: suspicious RCU usage
       -----------------------------
       drivers/iommu/intel-iommu.c:5216 RCU-list traversed in non-reader section!!
      
       other info that might help us debug this:
      
       rcu_scheduler_active = 2, debug_locks = 1
       no locks held by swapper/0/1.
      
       Call Trace:
        dump_stack+0xa0/0xea
        lockdep_rcu_suspicious+0x102/0x10b
        intel_iommu_init+0x947/0xb13
        pci_iommu_init+0x26/0x62
        do_one_initcall+0xfe/0x500
        kernel_init_freeable+0x45a/0x4f8
        kernel_init+0x11/0x139
        ret_from_fork+0x3a/0x50
       DMAR: Intel(R) Virtualization Technology for Directed I/O
      
      Fixes: d8190dc6 ("iommu/vt-d: Enable DMA remapping after rmrr mapped")
      Signed-off-by: Qian Cai <cai@lca.pw>
      Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      2d48ea0e
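      A minimal sketch of the fix pattern, with the loop body condensed:
      hold dmar_global_lock (an rwsem) across the RCU-list walks that
      intel_iommu_init() performs, since no RCU read-side section protects
      them at init time.

          /* Sketch: take the global DMAR lock around the iommu walk so the
           * RCU-list traversal inside for_each_active_iommu() is covered. */
          down_read(&dmar_global_lock);
          for_each_active_iommu(iommu, drhd) {
                  iommu_device_sysfs_add(&iommu->iommu, NULL,
                                         intel_iommu_groups,
                                         "%s", iommu->name);
                  iommu_device_set_ops(&iommu->iommu, &intel_iommu_ops);
                  iommu_device_register(&iommu->iommu);
          }
          up_read(&dmar_global_lock);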
  6. 03 Mar, 2020 1 commit
  7. 19 Feb, 2020 5 commits
  8. 25 Jan, 2020 2 commits
  9. 24 Jan, 2020 5 commits
  10. 08 Jan, 2020 2 commits
    • iommu/vt-d: Unlink device if failed to add to group · f78947c4
      Authored by Jon Derrick
      If the device fails to be added to the group, make sure to unlink the
      iommu reference before returning (see the sketch after this entry).
      Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
      Fixes: 39ab9555 ("iommu: Add sysfs bindings for struct iommu_device")
      Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      f78947c4
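      A minimal sketch of the error path, condensed from
      intel_iommu_add_device() (the lookup details are illustrative): the
      reference taken by iommu_device_link() must be dropped when
      iommu_group_get_for_dev() fails.

          /* Sketch: undo the sysfs link if the device cannot join a group. */
          static int intel_iommu_add_device(struct device *dev)
          {
                  struct intel_iommu *iommu;
                  struct iommu_group *group;
                  u8 bus, devfn;

                  iommu = device_to_iommu(dev, &bus, &devfn);
                  if (!iommu)
                          return -ENODEV;

                  iommu_device_link(&iommu->iommu, dev);

                  group = iommu_group_get_for_dev(dev);
                  if (IS_ERR(group)) {
                          /* the fix: drop the link taken above on failure */
                          iommu_device_unlink(&iommu->iommu, dev);
                          return PTR_ERR(group);
                  }

                  iommu_group_put(group);
                  return 0;
          }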
    • iommu/vt-d: Fix adding non-PCI devices to Intel IOMMU · 4a350a0e
      Authored by Patrick Steinhardt
      Starting with commit fa212a97 ("iommu/vt-d: Probe DMA-capable ACPI
      name space devices"), we now probe DMA-capable ACPI name
      space devices. On Dell XPS 13 9343, which has an Intel LPSS platform
      device INTL9C60 enumerated via ACPI, this change leads to the following
      warning:
      
          ------------[ cut here ]------------
          WARNING: CPU: 1 PID: 1 at pci_device_group+0x11a/0x130
          CPU: 1 PID: 1 Comm: swapper/0 Tainted: G                T 5.5.0-rc3+ #22
          Hardware name: Dell Inc. XPS 13 9343/0310JH, BIOS A20 06/06/2019
          RIP: 0010:pci_device_group+0x11a/0x130
          Code: f0 ff ff 48 85 c0 49 89 c4 75 c4 48 8d 74 24 10 48 89 ef e8 48 ef ff ff 48 85 c0 49 89 c4 75 af e8 db f7 ff ff 49 89 c4 eb a5 <0f> 0b 49 c7 c4 ea ff ff ff eb 9a e8 96 1e c7 ff 66 0f 1f 44 00 00
          RSP: 0000:ffffc0d6c0043cb0 EFLAGS: 00010202
          RAX: 0000000000000000 RBX: ffffa3d1d43dd810 RCX: 0000000000000000
          RDX: ffffa3d1d4fecf80 RSI: ffffa3d12943dcc0 RDI: ffffa3d1d43dd810
          RBP: ffffa3d1d43dd810 R08: 0000000000000000 R09: ffffa3d1d4c04a80
          R10: ffffa3d1d4c00880 R11: ffffa3d1d44ba000 R12: 0000000000000000
          R13: ffffa3d1d4383b80 R14: ffffa3d1d4c090d0 R15: ffffa3d1d4324530
          FS:  0000000000000000(0000) GS:ffffa3d1d6700000(0000) knlGS:0000000000000000
          CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
          CR2: 0000000000000000 CR3: 000000000460a001 CR4: 00000000003606e0
          Call Trace:
           ? iommu_group_get_for_dev+0x81/0x1f0
           ? intel_iommu_add_device+0x61/0x170
           ? iommu_probe_device+0x43/0xd0
           ? intel_iommu_init+0x1fa2/0x2235
           ? pci_iommu_init+0x52/0xe7
           ? e820__memblock_setup+0x15c/0x15c
           ? do_one_initcall+0xcc/0x27e
           ? kernel_init_freeable+0x169/0x259
           ? rest_init+0x95/0x95
           ? kernel_init+0x5/0xeb
           ? ret_from_fork+0x35/0x40
          ---[ end trace 28473e7abc25b92c ]---
          DMAR: ACPI name space devices didn't probe correctly
      
      The bug stems from the fact that, while we now enumerate ACPI devices,
      we cannot handle non-PCI devices when generating the device group. Fix
      the issue by implementing an Intel-specific callback that returns
      `pci_device_group` only if the device is a PCI device, and a generic
      device group otherwise (see the sketch after this entry).
      
      Fixes: fa212a97 ("iommu/vt-d: Probe DMA-capable ACPI name space devices")
      Signed-off-by: Patrick Steinhardt <ps@pks.im>
      Cc: stable@vger.kernel.org # v5.3+
      Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      4a350a0e
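      A minimal sketch of the callback described above, wired into the
      driver's iommu_ops (other callbacks elided):

          /* Sketch: fall back to a generic group for non-PCI devices,
           * e.g. ACPI-enumerated platform devices like INTL9C60. */
          static struct iommu_group *intel_iommu_device_group(struct device *dev)
          {
                  if (dev_is_pci(dev))
                          return pci_device_group(dev);
                  return generic_device_group(dev);
          }

          static const struct iommu_ops intel_iommu_ops = {
                  /* ... */
                  .device_group = intel_iommu_device_group,
          };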
  11. 07 Jan, 2020 14 commits
  12. 23 Dec, 2019 2 commits
    • iommu: intel: Use generic_iommu_put_resv_regions() · 0ecdebb7
      Authored by Thierry Reding
      Use the new standard function instead of open-coding it (see the sketch
      after this entry).
      
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      0ecdebb7
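      A minimal sketch of the change, assuming the driver's existing
      intel_iommu_get_resv_regions() callback: instead of an Intel-specific
      loop that frees each reserved region, point the ops table at the
      generic helper (other callbacks elided).

          static const struct iommu_ops intel_iommu_ops = {
                  /* ... */
                  .get_resv_regions = intel_iommu_get_resv_regions,
                  /* generic helper frees every entry on the list,
                   * replacing the open-coded local variant */
                  .put_resv_regions = generic_iommu_put_resv_regions,
          };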
    • iommu/iova: Silence warnings under memory pressure · 944c9175
      Authored by Qian Cai
      When running heavy memory pressure workloads, this 5+ year-old system
      throws the endless warnings below because disk IO is too slow to
      recover from swapping. Since the volume from alloc_iova_fast() could be
      large, once it calls printk(), it will trigger disk IO (writing to the
      log files) and pending softirqs, which could cause an infinite loop and
      make no progress for days under the ongoing memory reclaim. This is the
      Intel counterpart of the AMD fix that has already been merged; see
      commit 3d708895 ("iommu/amd: Silence warnings under memory pressure").
      Since the allocation failure will already be reported in
      intel_alloc_iova(), just call dev_err_once() there, because even
      "ratelimited" is too much, and silence the one in alloc_iova_mem() to
      avoid the expensive warn_alloc() (see the sketch after this entry).
      
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       slab_out_of_memory: 66 callbacks suppressed
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: iommu_iova, object size: 40, buffer size: 448, default order: 0, min order: 0
         node 0: slabs: 1822, objs: 16398, free: 0
         node 1: slabs: 2051, objs: 18459, free: 31
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: iommu_iova, object size: 40, buffer size: 448, default order: 0, min order: 0
         node 0: slabs: 1822, objs: 16398, free: 0
         node 1: slabs: 2051, objs: 18459, free: 31
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: iommu_iova, object size: 40, buffer size: 448, default order: 0, min order: 0
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 1: slabs: 381, objs: 2286, free: 27
         node 1: slabs: 381, objs: 2286, free: 27
         node 1: slabs: 381, objs: 2286, free: 27
         node 1: slabs: 381, objs: 2286, free: 27
         node 0: slabs: 1822, objs: 16398, free: 0
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
         node 1: slabs: 2051, objs: 18459, free: 31
         node 0: slabs: 697, objs: 4182, free: 0
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
         node 1: slabs: 381, objs: 2286, free: 27
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
         node 0: slabs: 697, objs: 4182, free: 0
         node 1: slabs: 381, objs: 2286, free: 27
       hpsa 0000:03:00.0: DMAR: Allocating 1-page iova failed
       warn_alloc: 96 callbacks suppressed
        kworker/11:1H: page allocation failure: order:0, mode:0xa20(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0-1
       CPU: 11 PID: 1642 Comm: kworker/11:1H Tainted: G    B
        Hardware name: HP ProLiant XL420 Gen9/ProLiant XL420 Gen9, BIOS U19 12/27/2015
       Workqueue: kblockd blk_mq_run_work_fn
       Call Trace:
        dump_stack+0xa0/0xea
        warn_alloc.cold.94+0x8a/0x12d
        __alloc_pages_slowpath+0x1750/0x1870
        __alloc_pages_nodemask+0x58a/0x710
        alloc_pages_current+0x9c/0x110
        alloc_slab_page+0xc9/0x760
        allocate_slab+0x48f/0x5d0
        new_slab+0x46/0x70
        ___slab_alloc+0x4ab/0x7b0
        __slab_alloc+0x43/0x70
        kmem_cache_alloc+0x2dd/0x450
       SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
        alloc_iova+0x33/0x210
         cache: skbuff_head_cache, object size: 208, buffer size: 640, default order: 0, min order: 0
         node 0: slabs: 697, objs: 4182, free: 0
        alloc_iova_fast+0x62/0x3d1
         node 1: slabs: 381, objs: 2286, free: 27
        intel_alloc_iova+0xce/0xe0
        intel_map_sg+0xed/0x410
        scsi_dma_map+0xd7/0x160
        scsi_queue_rq+0xbf7/0x1310
        blk_mq_dispatch_rq_list+0x4d9/0xbc0
        blk_mq_sched_dispatch_requests+0x24a/0x300
        __blk_mq_run_hw_queue+0x156/0x230
        blk_mq_run_work_fn+0x3b/0x40
        process_one_work+0x579/0xb90
        worker_thread+0x63/0x5b0
        kthread+0x1e6/0x210
        ret_from_fork+0x3a/0x50
       Mem-Info:
       active_anon:2422723 inactive_anon:361971 isolated_anon:34403
        active_file:2285 inactive_file:1838 isolated_file:0
        unevictable:0 dirty:1 writeback:5 unstable:0
        slab_reclaimable:13972 slab_unreclaimable:453879
        mapped:2380 shmem:154 pagetables:6948 bounce:0
        free:19133 free_pcp:7363 free_cma:0
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      944c9175
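      A minimal sketch of the two silencing changes described above (code
      condensed from intel_alloc_iova() and alloc_iova_mem()):

          /* intel_alloc_iova(): report the failure once per device instead
           * of printing on every failed atomic allocation. */
          iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
                                     IOVA_PFN(dma_mask), true);
          if (unlikely(!iova_pfn)) {
                  dev_err_once(dev, "Allocating %ld-page iova failed",
                               nrpages);
                  return 0;
          }

          /* alloc_iova_mem(): __GFP_NOWARN skips the expensive warn_alloc()
           * path when the GFP_ATOMIC slab allocation fails. */
          static struct iova *alloc_iova_mem(void)
          {
                  return kmem_cache_zalloc(iova_cache, GFP_ATOMIC | __GFP_NOWARN);
          }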
  13. 17 Dec, 2019 3 commits