1. 29 March 2018 (7 commits)
  2. 15 March 2018 (2 commits)
  3. 15 February 2018 (1 commit)
    • iommu/amd: Avoid locking get_irq_table() from atomic context · df42a04b
      Authored by Scott Wood
      get_irq_table() previously acquired amd_iommu_devtable_lock, which is not
      a raw lock and thus cannot be acquired from atomic context on
      PREEMPT_RT.  Many calls to modify_irte*() come from atomic context due to
      the IRQ desc->lock, as does amd_iommu_update_ga() due to the preemption
      disabling in vcpu_load/put().
      
      The only difference between calling get_irq_table() and reading from
      irq_lookup_table[] directly, other than the lock acquisition and the
      amd_iommu_rlookup_table[] check, arises when the table entry is
      unpopulated.  That should never happen when looking up a devid that came
      from an irq_2_irte struct, since get_irq_table() would already have been
      called on that devid during irq_remapping_alloc().
      
      The lock acquisition is not needed in these cases because entries in
      irq_lookup_table[] never change once non-NULL -- nor would the
      amd_iommu_devtable_lock usage in get_irq_table() provide meaningful
      protection if they did, since it's released before using the looked up
      table in the get_irq_table() caller.
      
      Rename the old get_irq_table() to alloc_irq_table(), and create a new
      lockless get_irq_table(), for use in non-allocating contexts, that WARNs
      if it doesn't find what it's looking for.
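      A minimal sketch of the lockless lookup described above, reconstructed
      from this commit message rather than from the patch itself:
      
          static struct irq_remap_table *get_irq_table(u16 devid)
          {
                  struct irq_remap_table *table = irq_lookup_table[devid];
      
                  /* Entries never change once non-NULL, so no lock is needed. */
                  if (WARN_ONCE(!table, "%s: no irq table for devid %x\n",
                                __func__, devid))
                          return NULL;
      
                  return table;
          }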
      Signed-off-by: Scott Wood <swood@redhat.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  4. 13 February 2018 (2 commits)
  5. 12 January 2018 (1 commit)
  6. 30 December 2017 (1 commit)
    • genirq/irqdomain: Rename early argument of irq_domain_activate_irq() · 702cb0a0
      Authored by Thomas Gleixner
      The 'early' argument of irq_domain_activate_irq() is actually used to
      denote reservation mode. To avoid confusion, rename it before abuse
      happens.
      
      No functional change.
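      For reference, a sketch of the rename; the new parameter name 'reserve'
      is assumed from the mainline history, with the rest of the signature
      unchanged:
      
          /* Before: the flag reads as if it meant "early activation". */
          int irq_domain_activate_irq(struct irq_data *irq_data, bool early);
      
          /* After: the name states what the flag actually denotes. */
          int irq_domain_activate_irq(struct irq_data *irq_data, bool reserve);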
      
      Fixes: 72491643 ("genirq/irqdomain: Update irq_domain_ops.activate() signature")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexandru Chirvasitu <achirvasub@gmail.com>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Mikael Pettersson <mikpelinux@gmail.com>
      Cc: Josh Poulson <jopoulso@microsoft.com>
      Cc: Mihai Costache <v-micos@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: linux-pci@vger.kernel.org
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Dexuan Cui <decui@microsoft.com>
      Cc: Simon Xiao <sixiao@microsoft.com>
      Cc: Saeed Mahameed <saeedm@mellanox.com>
      Cc: Jork Loeser <Jork.Loeser@microsoft.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: devel@linuxdriverproject.org
      Cc: KY Srinivasan <kys@microsoft.com>
      Cc: Alan Cox <alan@linux.intel.com>
      Cc: Sakari Ailus <sakari.ailus@intel.com>
      Cc: linux-media@vger.kernel.org
  7. 21 December 2017 (2 commits)
  8. 04 November 2017 (3 commits)
  9. 13 October 2017 (1 commit)
  10. 12 October 2017 (1 commit)
    • iommu/iova: Make rcache flush optional on IOVA allocation failure · 538d5b33
      Authored by Tomasz Nowicki
      Since IOVA allocation failure is not an unusual case, we need to flush
      the CPUs' rcaches in the hope that the next attempt will succeed.
      
      However, it is useful to make the rcache flush step optional, for two
      reasons:
      - Scalability: on a large system with ~100 CPUs, iterating over and
        flushing the rcache for each CPU becomes a serious bottleneck, so we
        may want to defer it.
      - free_cpu_cached_iovas() does not care about the max PFN we are
        interested in, so we may flush our rcaches and still get no new IOVA,
        as in the commonly used scenario:
      
          if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
              iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
      
          if (!iova)
              iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
      
         1. The first alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to
            get PCI devices a SAC address.
         2. alloc_iova() fails because the 32-bit space is full.
         3. The rcaches contain PFNs outside the 32-bit space, so
            free_cpu_cached_iovas() throws entries away for nothing and
            alloc_iova() fails again.
         4. The next alloc_iova_fast() call cannot take advantage of the
            rcache, since we have just defeated the caches, and we end up
            taking the slowest path.
      
      This patch reworks the flushed_rcache local flag into an additional
      function argument that controls the rcache flush step, and updates all
      callers to perform the flush only as a last resort.
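      A sketch of the resulting call pattern, assuming the flag becomes a
      trailing bool argument (the exact parameter position is an assumption):
      
          /* Try without flushing first; flush the per-CPU rcaches only as a
           * last resort, since the flush is expensive and may gain nothing. */
          if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
                  iova = alloc_iova_fast(iovad, iova_len,
                                         DMA_BIT_MASK(32) >> shift, false);
      
          if (!iova)
                  iova = alloc_iova_fast(iovad, iova_len,
                                         dma_limit >> shift, true);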
      Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Tested-by: Nate Watterson <nwatters@codeaurora.org>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  11. 11 October 2017 (1 commit)
    • iommu/amd: Do not disable SWIOTLB if SME is active · aba2d9a6
      Authored by Tom Lendacky
      When SME memory encryption is active, it relies on SWIOTLB to handle
      DMA for devices that cannot meet the addressing requirement of having
      the encryption mask set in the physical address.  The IOMMU currently
      disables SWIOTLB if it is not running in passthrough mode.  This is not
      desired, as non-PCI devices attempting DMA may fail.  Update the code to
      leave SWIOTLB enabled when SME is active.
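      A one-line sketch of the check described above; sme_active() is the SME
      helper of that era, and the exact placement is an assumption:
      
          /* Keep SWIOTLB available when SME is active, even outside
           * passthrough mode, so devices with addressing limits still work. */
          swiotlb = (iommu_pass_through || sme_active()) ? 1 : 0;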
      
      Fixes: 2543a786 ("iommu/amd: Allow the AMD IOMMU to work with memory encryption")
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  12. 10 October 2017 (2 commits)
  13. 27 September 2017 (1 commit)
  14. 26 September 2017 (2 commits)
    • iommu/amd: Reevaluate vector configuration on activate() · 5ba204a1
      Authored by Thomas Gleixner
      With the upcoming reservation/management scheme, early activation will
      assign a special vector. The final activation at request_irq() assigns a
      real vector, which needs to be updated in the tables.
      
      Split out the reconfiguration code in set_affinity and use it for
      reactivation.
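      A hedged sketch, with hypothetical names, of the split described above:
      set_affinity() and the (re)activation path now share one helper that
      writes the current vector/destination into the remapping table entry:
      
          static void amd_ir_update_irte(struct irq_data *irqd,
                                         struct amd_iommu *iommu,
                                         struct amd_ir_data *ir_data,
                                         struct irq_2_irte *irte_info,
                                         struct irq_cfg *cfg)
          {
                  /* Reprogram the IRTE with the real vector assigned at
                   * request_irq() time. */
                  iommu->irte_ops->set_affinity(ir_data->entry,
                                                irte_info->devid,
                                                irte_info->index, cfg->vector,
                                                cfg->dest_apicid);
          }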
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: iommu@lists.linux-foundation.org
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213155.944883733@linutronix.de
    • genirq/irqdomain: Update irq_domain_ops.activate() signature · 72491643
      Authored by Thomas Gleixner
      The irq_domain_ops.activate() callback has no return value and no way to
      tell the function that the activation is early.
      
      The upcoming changes to support a reservation scheme, which allows
      interrupt vectors on x86 to be assigned only when the interrupt is
      actually requested, require the following (sketched below):
      
        - A return value, so activation can fail at request_irq() time
        
        - Information that the activate invocation is early, i.e. before
          request_irq()
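      A sketch of the updated callback shape; the bool parameter and int
      return follow this commit's description, and the struct is abridged:
      
          struct irq_domain_ops {
                  /* ... other callbacks elided ... */
                  int (*activate)(struct irq_domain *d,
                                  struct irq_data *irq_data, bool early);
                  void (*deactivate)(struct irq_domain *d,
                                     struct irq_data *irq_data);
          };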
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Juergen Gross <jgross@suse.com>
      Tested-by: Yu Chen <yu.c.chen@intel.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Link: https://lkml.kernel.org/r/20170913213152.848490816@linutronix.de
  15. 28 August 2017 (2 commits)
  16. 16 August 2017 (7 commits)
  17. 25 July 2017 (1 commit)
  18. 18 July 2017 (1 commit)
    • iommu/amd: Allow the AMD IOMMU to work with memory encryption · 2543a786
      Authored by Tom Lendacky
      The IOMMU is programmed with physical addresses for the various tables
      and buffers that are used to communicate between the device and the
      driver. When the driver allocates this memory it is encrypted. In order
      for the IOMMU to access the memory as encrypted the encryption mask needs
      to be included in these physical addresses during configuration.
      
      The PTEs created by the IOMMU should also include the encryption mask,
      so that when the device behind the IOMMU performs DMA, the DMA targets
      encrypted memory.
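      A minimal sketch of folding the mask in, assuming the __sme_set()
      helper from the SME series; the wrapper name is illustrative:
      
          /* Physical addresses handed to the IOMMU carry the encryption
           * mask, so the device sees the same encrypted view as the CPU. */
          static inline u64 iommu_virt_to_phys(void *vaddr)
          {
                  return (u64)__sme_set(virt_to_phys(vaddr));
          }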
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Acked-by: Joerg Roedel <jroedel@suse.de>
      Cc: <iommu@lists.linux-foundation.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Toshimitsu Kani <toshi.kani@hpe.com>
      Cc: kasan-dev@googlegroups.com
      Cc: kvm@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-efi@vger.kernel.org
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/3053631ea25ba8b1601c351cb7c541c496f6d9bc.1500319216.git.thomas.lendacky@amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  19. 28 June 2017 (2 commits)