1. 30 Jul 2022, 1 commit
    • PCI: Remove pci_mmap_page_range() wrapper · 0ad722f1
      Committed by Arnd Bergmann
      The ARCH_GENERIC_PCI_MMAP_RESOURCE symbol came up in a recent discussion,
      and I noticed that this was left behind by an unfinished cleanup from 2017.
      
      The only architecture that still relies on providing its own
      pci_mmap_page_range() helper instead of using the generic
      pci_mmap_resource_range() is sparc. Presumably the reasons for this have
      not changed, but at least this can be simplified by converting sparc to use
      the same interface as the others.
      
      The only difference between the two is the device-specific offset that gets
      added to or subtracted from vma->vm_pgoff.
      
      Change the only caller of pci_mmap_page_range() in common code to subtract
      this offset and call the modern interface, while adding it back in the
      sparc implementation to preserve the existing behavior.
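
      A minimal sketch of the common-code side of that conversion (the
      surrounding names and context are illustrative, not the literal
      patch):

          /* The old pci_mmap_page_range() took a device-relative offset;
           * pci_mmap_resource_range() wants a resource-relative one, so
           * subtract the resource start before calling it. */
          resource_size_t start, end;

          pci_resource_to_user(dev, i, &dev->resource[i], &start, &end);
          vma->vm_pgoff -= start >> PAGE_SHIFT;
          ret = pci_mmap_resource_range(dev, i, vma, mmap_state,
                                        write_combine);

      The sparc implementation then adds the same offset back internally,
      preserving the old user-visible behavior.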
      
      This removes the complexities of the dual interfaces from the common code,
and keeps it all specific to the sparc architecture code. According to
      David Miller, the sparc code lets user space poke into the VGA I/O port
      registers by mmapping the I/O space of the parent bridge device, which is
      something that the generic pci_mmap_resource_range() code apparently does
      not allow.
      
      Link: https://lore.kernel.org/lkml/1519887203.622.3.camel@infradead.org/t/
      Link: https://lore.kernel.org/lkml/20220714214657.2402250-3-shorne@gmail.com/
Link: https://lore.kernel.org/r/20220715153617.3393420-1-arnd@kernel.org
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Stafford Horne <shorne@gmail.com>
  2. 06 May 2022, 1 commit
  3. 28 Apr 2022, 1 commit
    • bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management · 512881ea
      Committed by Lu Baolu
The devices on platform/amba/fsl-mc/PCI buses could be bound to drivers
      with the device DMA managed by kernel drivers or user-space applications.
      Unfortunately, multiple devices may be placed in the same IOMMU group
      because they cannot be isolated from each other. The DMA on these devices
      must either be entirely under kernel control or userspace control, never
      a mixture. Otherwise driver integrity is not guaranteed, because the
      devices could access each other through peer-to-peer accesses that bypass
      the IOMMU protection.
      
This patch checks and sets the default DMA mode during driver binding,
      and cleans up during driver unbinding. In the default mode, the device
      DMA is managed by the device driver, which handles DMA operations through
      the kernel DMA APIs (see Documentation/core-api/dma-api.rst).
      
For cases where the devices are assigned for userspace control through a
      userspace driver framework (e.g. VFIO), the drivers (for example,
      vfio_pci, vfio_platform, etc.) may set a new flag (driver_managed_dma) to
      skip this default setting, on the assumption that such drivers know what
      they are doing with the device DMA.
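
      As an illustration, a PCI driver that hands device DMA to userspace
      would opt out of the kernel-ownership default roughly like this (a
      sketch; the driver name, ID table, and callbacks are hypothetical):

          static struct pci_driver my_vfio_pci_driver = {
                  .name               = "my-vfio-pci",
                  .id_table           = my_pci_ids,
                  .probe              = my_probe,
                  .remove             = my_remove,
                  /* Skip the default kernel-DMA ownership claim: this
                   * driver manages device DMA itself (e.g. on behalf of
                   * userspace). */
                  .driver_managed_dma = true,
          };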
      
Calling iommu_device_use_default_domain() before {of,acpi}_dma_configure
      is currently a problem. As things stand, the IOMMU driver ignores the
      initial iommu_probe_device() call when the device is added, since at
      that point it has no fwspec yet. In this situation,
      {of,acpi}_iommu_configure() retrigger iommu_probe_device() after the
      IOMMU driver has seen the firmware data via .of_xlate and learned that
      it is actually responsible for the given device. As a result, until
      that gets fixed, iommu_device_use_default_domain() goes at the end, and
      arch_teardown_dma_ops() is called if it fails.
      
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Stuart Yoder <stuyoder@gmail.com>
      Cc: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Tested-by: Eric Auger <eric.auger@redhat.com>
      Link: https://lore.kernel.org/r/20220418005000.897664-5-baolu.lu@linux.intel.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  4. 22 Apr 2022, 1 commit
  5. 30 Mar 2022, 1 commit
  6. 05 Mar 2022, 1 commit
  7. 27 Feb 2022, 2 commits
    • PCI/IOV: Add pci_iov_get_pf_drvdata() to allow VF reaching the drvdata of a PF · a7e9f240
      Committed by Jason Gunthorpe
There are some cases where an SR-IOV VF driver will need to reach into and
      interact with the PF driver. This requires accessing the drvdata of the PF.
      
      Provide a function pci_iov_get_pf_drvdata() to return this PF drvdata in a
      safe way. Normally accessing a drvdata of a foreign struct device would be
      done using the device_lock() to protect against device driver
      probe()/remove() races.
      
However, due to the design of pci_enable_sriov() this will result in an
      ABBA deadlock on the device_lock, as the PF's device_lock is held during PF
      sriov_configure() while calling pci_enable_sriov() which in turn holds the
      VF's device_lock while calling VF probe(), and similarly for remove.
      
      This means the VF driver can never obtain the PF's device_lock.
      
      Instead use the implicit locking created by pci_enable/disable_sriov(). A
      VF driver can access its PF drvdata only while its own driver is attached,
      and the PF driver can control access to its own drvdata based on when it
      calls pci_enable/disable_sriov().
      
      To use this API the PF driver will setup the PF drvdata in the probe()
      function. pci_enable_sriov() is only called from sriov_configure() which
      cannot happen until probe() completes, ensuring no VF races with drvdata
      setup.
      
      For removal, the PF driver must call pci_disable_sriov() in its remove
      function before destroying any of the drvdata. This ensures that all VF
      drivers are unbound before returning, fencing concurrent access to the
      drvdata.
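
      Sketched usage under those rules (driver and variable names are
      illustrative; only pci_iov_get_pf_drvdata() itself comes from this
      patch):

          /* PF driver remove(): unbind all VF drivers before the drvdata
           * they may be reading is destroyed. */
          static void pf_remove(struct pci_dev *pdev)
          {
                  pci_disable_sriov(pdev);        /* fences VF access */
                  kfree(pci_get_drvdata(pdev));
          }

          /* VF driver probe(): valid only while this VF driver is bound,
           * and only if the PF is bound to the expected PF driver. */
          struct pf_priv *priv = pci_iov_get_pf_drvdata(vf_pdev, &pf_driver);
          if (IS_ERR(priv))
                  return PTR_ERR(priv);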
      
The introduction of a new function to do this access makes the special
      locking scheme explicit and documents the requirements on the PF/VF
      drivers using it.
      
Link: https://lore.kernel.org/all/20220224142024.147653-5-yishaih@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
      Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    • PCI/IOV: Add pci_iov_vf_id() to get VF index · 21ca9fb6
      Committed by Jason Gunthorpe
      The PCI core uses the VF index internally, often called the vf_id,
during the setup of the VF, e.g. in pci_iov_add_virtfn().
      
      This index is needed for device drivers that implement live migration
      for their internal operations that configure/control their VFs.
      
Specifically, the mlx5_vfio_pci driver introduced in coming patches from
      this series needs it, rather than the bus/device/function which is
      exposed today.
      
      Add pci_iov_vf_id() which computes the vf_id by reversing the math that
      was used to create the bus/device/function.
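
      The reverse computation looks roughly like this (a sketch based on
      the SR-IOV routing layout, where VF addresses start at the PF address
      plus an offset and advance by a stride):

          int pci_iov_vf_id(struct pci_dev *dev)
          {
                  struct pci_dev *pf;

                  if (!dev->is_virtfn)
                          return -EINVAL;

                  pf = pci_physfn(dev);
                  /* Undo: VF devid = PF devid + offset + vf_id * stride */
                  return (pci_dev_id(dev) -
                          (pci_dev_id(pf) + pf->sriov->offset)) /
                         pf->sriov->stride;
          }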
      
Link: https://lore.kernel.org/all/20220224142024.147653-2-yishaih@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
      Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
  8. 11 Jan 2022, 1 commit
  9. 18 Dec 2021, 1 commit
  10. 17 Dec 2021, 2 commits
  11. 09 Dec 2021, 2 commits
  12. 19 Nov 2021, 1 commit
  13. 12 Nov 2021, 1 commit
  14. 11 Nov 2021, 1 commit
  15. 08 Nov 2021, 1 commit
  16. 30 Oct 2021, 1 commit
  17. 18 Oct 2021, 2 commits
  18. 13 Oct 2021, 1 commit
    • PCI: Return NULL for to_pci_driver(NULL) · 8e9028b3
      Committed by Bjorn Helgaas
      to_pci_driver() takes a pointer to a struct device_driver and uses
      container_of() to find the struct pci_driver that contains it.
      
      If given a NULL pointer to a struct device_driver, return a NULL pci_driver
      pointer instead of applying container_of() to NULL.
      
      This simplifies callers that would otherwise have to check for a NULL
      pointer first.
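
      The change amounts to a NULL guard in the helper, roughly (a sketch
      of the idea rather than the exact diff):

          static inline struct pci_driver *to_pci_driver(struct device_driver *drv)
          {
                  return drv ? container_of(drv, struct pci_driver, driver)
                             : NULL;
          }
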
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  19. 12 Oct 2021, 1 commit
  20. 22 Sep 2021, 1 commit
  21. 01 Sep 2021, 4 commits
  22. 27 Aug 2021, 3 commits
  23. 25 Aug 2021, 2 commits
  24. 23 Aug 2021, 1 commit
    • PCI: Introduce domain_nr in pci_host_bridge · 15d82ca2
      Committed by Boqun Feng
Currently we retrieve the PCI domain number of the host bridge from the
      bus sysdata (or pci_config_window if PCI_DOMAINS_GENERIC=y). Actually
      we have the information at PCI host bridge probing time, and it makes
      sense to store it in pci_host_bridge. One motivation for doing so is
      supporting PCI on Hyper-V for ARM64: the Hyper-V host bridge doesn't
      have a pci_config_window, whereas ARM64 is a PCI_DOMAINS_GENERIC=y
      arch, so we cannot retrieve the PCI domain number from
      pci_config_window on an ARM64 Hyper-V guest.
      
As preparation for ARM64 Hyper-V PCI support, we introduce domain_nr in
      pci_host_bridge and a sentinel value to allow drivers to set domain
      numbers properly at probing time. Currently
      CONFIG_PCI_DOMAINS_GENERIC=y archs are the only users of this
      newly-introduced field.
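
      A host bridge driver could then set the domain explicitly at probe
      time, roughly like this (a sketch; PCI_DOMAIN_NR_NOT_SET is the
      sentinel left in place when the driver does not set one):

          bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port));
          if (!bridge)
                  return -ENOMEM;
          /* Override the PCI_DOMAIN_NR_NOT_SET default so the generic
           * PCI_DOMAINS_GENERIC code uses this number instead of
           * computing one itself. */
          bridge->domain_nr = my_domain_nr;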
      
Link: https://lore.kernel.org/r/20210726180657.142727-2-boqun.feng@gmail.com
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
  25. 21 Aug 2021, 6 commits