1. 12 Jun 2019, 1 commit
    • iommu/vt-d: Duplicate iommu_resv_region objects per device list · 5f64ce54
      Eric Auger committed
      intel_iommu_get_resv_regions() aims to return the list of
      reserved regions accessible by a given @device. However, several
      devices can access the same reserved memory region, and when
      building the list it is not safe to use a single iommu_resv_region
      object whose container is the RMRR. This iommu_resv_region must
      be duplicated for each device's reserved region list.
      
      Let's remove the struct iommu_resv_region from the RMRR unit
      and allocate the iommu_resv_region directly in
      intel_iommu_get_resv_regions(). We hold the dmar_global_lock
      instead of the RCU lock to allow sleeping (a sketch of the
      resulting pattern follows this entry).
      
      Fixes: 0659b8dc ("iommu/vt-d: Implement reserved region get/put callbacks")
      Signed-off-by: Eric Auger <eric.auger@redhat.com>
      Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
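
      A minimal sketch of the per-device duplication pattern described
      above, written in the style of drivers/iommu/intel-iommu.c. It is
      not the literal upstream change: the matching of RMRR units to the
      requesting device is omitted and the callback name
      example_get_resv_regions() is invented for illustration, while
      iommu_alloc_resv_region(), for_each_rmrr_units() and
      dmar_global_lock are existing kernel facilities named in the
      commit message.

      /* Sketch only: allocate a fresh iommu_resv_region per device. */
      static void example_get_resv_regions(struct device *device,
                                           struct list_head *head)
      {
          struct iommu_resv_region *resv;
          struct dmar_rmrr_unit *rmrr;

          /* dmar_global_lock is a rwsem, so the sleeping allocation
           * below is allowed, unlike under the former rcu read lock. */
          down_read(&dmar_global_lock);
          for_each_rmrr_units(rmrr) {
              size_t length = rmrr->end_address - rmrr->base_address + 1;

              /* One object per device list instead of reusing the
               * single object embedded in the RMRR unit. */
              resv = iommu_alloc_resv_region(rmrr->base_address, length,
                                             IOMMU_READ | IOMMU_WRITE,
                                             IOMMU_RESV_DIRECT);
              if (!resv)
                  break;
              list_add_tail(&resv->list, head);
          }
          up_read(&dmar_global_lock);
      }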
  2. 03 Jun 2019, 1 commit
  3. 02 Jun 2019, 1 commit
  4. 28 May 2019, 13 commits
  5. 27 May 2019, 3 commits
  6. 03 May 2019, 3 commits
  7. 30 Apr 2019, 1 commit
  8. 26 Apr 2019, 1 commit
    • iommu/vt-d: Don't request page request irq under dmar_global_lock · a7755c3c
      Lu Baolu committed
      Requesting the page request irq under dmar_global_lock could cause
      a circular locking dependency (caught by lockdep); a sketch of the
      fix follows this entry.
      
      [    4.100055] ======================================================
      [    4.100063] WARNING: possible circular locking dependency detected
      [    4.100072] 5.1.0-rc4+ #2169 Not tainted
      [    4.100078] ------------------------------------------------------
      [    4.100086] swapper/0/1 is trying to acquire lock:
      [    4.100094] 000000007dcbe3c3 (dmar_lock){+.+.}, at: dmar_alloc_hwirq+0x35/0x140
      [    4.100112] but task is already holding lock:
      [    4.100120] 0000000060bbe946 (dmar_global_lock){++++}, at: intel_iommu_init+0x191/0x1438
      [    4.100136] which lock already depends on the new lock.
      [    4.100146] the existing dependency chain (in reverse order) is:
      [    4.100155]
                     -> #2 (dmar_global_lock){++++}:
      [    4.100169]        down_read+0x44/0xa0
      [    4.100178]        intel_irq_remapping_alloc+0xb2/0x7b0
      [    4.100186]        mp_irqdomain_alloc+0x9e/0x2e0
      [    4.100195]        __irq_domain_alloc_irqs+0x131/0x330
      [    4.100203]        alloc_isa_irq_from_domain.isra.4+0x9a/0xd0
      [    4.100212]        mp_map_pin_to_irq+0x244/0x310
      [    4.100221]        setup_IO_APIC+0x757/0x7ed
      [    4.100229]        x86_late_time_init+0x17/0x1c
      [    4.100238]        start_kernel+0x425/0x4e3
      [    4.100247]        secondary_startup_64+0xa4/0xb0
      [    4.100254]
                     -> #1 (irq_domain_mutex){+.+.}:
      [    4.100265]        __mutex_lock+0x7f/0x9d0
      [    4.100273]        __irq_domain_add+0x195/0x2b0
      [    4.100280]        irq_domain_create_hierarchy+0x3d/0x40
      [    4.100289]        msi_create_irq_domain+0x32/0x110
      [    4.100297]        dmar_alloc_hwirq+0x111/0x140
      [    4.100305]        dmar_set_interrupt.part.14+0x1a/0x70
      [    4.100314]        enable_drhd_fault_handling+0x2c/0x6c
      [    4.100323]        apic_bsp_setup+0x75/0x7a
      [    4.100330]        x86_late_time_init+0x17/0x1c
      [    4.100338]        start_kernel+0x425/0x4e3
      [    4.100346]        secondary_startup_64+0xa4/0xb0
      [    4.100352]
                     -> #0 (dmar_lock){+.+.}:
      [    4.100364]        lock_acquire+0xb4/0x1c0
      [    4.100372]        __mutex_lock+0x7f/0x9d0
      [    4.100379]        dmar_alloc_hwirq+0x35/0x140
      [    4.100389]        intel_svm_enable_prq+0x61/0x180
      [    4.100397]        intel_iommu_init+0x1128/0x1438
      [    4.100406]        pci_iommu_init+0x16/0x3f
      [    4.100414]        do_one_initcall+0x5d/0x2be
      [    4.100422]        kernel_init_freeable+0x1f0/0x27c
      [    4.100431]        kernel_init+0xa/0x110
      [    4.100438]        ret_from_fork+0x3a/0x50
      [    4.100444]
                     other info that might help us debug this:
      
      [    4.100454] Chain exists of:
                       dmar_lock --> irq_domain_mutex --> dmar_global_lock
      [    4.100469]  Possible unsafe locking scenario:
      
      [    4.100476]        CPU0                    CPU1
      [    4.100483]        ----                    ----
      [    4.100488]   lock(dmar_global_lock);
      [    4.100495]                                lock(irq_domain_mutex);
      [    4.100503]                                lock(dmar_global_lock);
      [    4.100512]   lock(dmar_lock);
      [    4.100518]
                      *** DEADLOCK ***
      
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
      Cc: Kevin Tian <kevin.tian@intel.com>
      Reported-by: Dave Jiang <dave.jiang@intel.com>
      Fixes: a222a7f0 ("iommu/vt-d: Implement page request handling")
      Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
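
      The shape of the fix, paraphrased as a sketch rather than the
      literal diff: dmar_global_lock is released around the call that
      requests the page request irq, so dmar_alloc_hwirq() no longer
      runs under it. The surrounding per-iommu init path, the ret
      variable and the free_iommu error label are assumed context for
      illustration.

      /* Sketch: inside the per-iommu initialization path. */
      #ifdef CONFIG_INTEL_IOMMU_SVM
          if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) {
              /* dmar_alloc_hwirq() (reached via intel_svm_enable_prq())
               * must not nest inside dmar_global_lock, or lockdep sees
               * the dmar_lock -> irq_domain_mutex -> dmar_global_lock
               * cycle shown above; drop the lock around the call. */
              up_write(&dmar_global_lock);
              ret = intel_svm_enable_prq(iommu);
              down_write(&dmar_global_lock);
              if (ret)
                  goto free_iommu;
          }
      #endif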
  9. 12 Apr 2019, 1 commit
  10. 11 Apr 2019, 8 commits
  11. 22 Mar 2019, 2 commits
  12. 06 Mar 2019, 1 commit
    • mm: replace all open encodings for NUMA_NO_NODE · 98fa15f3
      Anshuman Khandual committed
      Patch series "Replace all open encodings for NUMA_NO_NODE", v3.
      
      All these places for replacement were found by running the following
      grep patterns on the entire kernel code.  Please let me know if this
      might have missed some instances.  This might also have replaced some
      false positives.  I would appreciate suggestions, inputs and review.
      
      1. git grep "nid == -1"
      2. git grep "node == -1"
      3. git grep "nid = -1"
      4. git grep "node = -1"
      
      This patch (of 2):
      
      At present there are multiple places where an invalid node number
      is encoded as -1.  Even though this is implicitly understood, it is
      always better to use a macro.  Replace these open encodings of an
      invalid node number with the global macro NUMA_NO_NODE.  This helps
      remove NUMA-related assumptions like 'invalid node' from various
      places, redirecting them to a common definition; a short
      before/after sketch follows this entry.
      
      Link: http://lkml.kernel.org/r/1545127933-10711-2-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>	[ixgbe]
      Acked-by: Jens Axboe <axboe@kernel.dk>			[mtip32xx]
      Acked-by: Vinod Koul <vkoul@kernel.org>			[dmaengine.c]
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>		[powerpc]
      Acked-by: Doug Ledford <dledford@redhat.com>		[drivers/infiniband]
      Cc: Joseph Qi <jiangqi903@gmail.com>
      Cc: Hans Verkuil <hverkuil@xs4all.nl>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
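
      A before/after sketch of the substitution this series performs.
      The helpers pick_node_before() and pick_node_after() are invented
      for illustration and are not taken from the patch; NUMA_NO_NODE
      is the existing macro from <linux/numa.h> and expands to -1.

      #include <linux/numa.h>            /* NUMA_NO_NODE */

      /* Before: "no node" open-coded as -1. */
      static int pick_node_before(int nid)
      {
          if (nid == -1)                 /* what the grep patterns match */
              nid = 0;
          return nid;
      }

      /* After: the same check spelled with the named constant. */
      static int pick_node_after(int nid)
      {
          if (nid == NUMA_NO_NODE)
              nid = 0;
          return nid;
      }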
  13. 01 Mar 2019, 2 commits
  14. 26 Feb 2019, 2 commits