1. 27 Feb 2022, 2 commits
    • PCI/IOV: Add pci_iov_get_pf_drvdata() to allow VF reaching the drvdata of a PF · a7e9f240
      Jason Gunthorpe authored
      There are some cases where a SR-IOV VF driver will need to reach into and
      interact with the PF driver. This requires accessing the drvdata of the PF.
      
      Provide a function pci_iov_get_pf_drvdata() to return this PF drvdata in a
      safe way. Normally accessing a drvdata of a foreign struct device would be
      done using the device_lock() to protect against device driver
      probe()/remove() races.
      
      However, due to the design of pci_enable_sriov(), this would result in an
      ABBA deadlock on the device_lock: the PF's device_lock is held during the
      PF's sriov_configure() while calling pci_enable_sriov(), which in turn
      holds the VF's device_lock while calling the VF's probe(); similarly for
      remove.
      
      This means the VF driver can never obtain the PF's device_lock.
      
      Instead use the implicit locking created by pci_enable/disable_sriov(). A
      VF driver can access its PF drvdata only while its own driver is attached,
      and the PF driver can control access to its own drvdata based on when it
      calls pci_enable/disable_sriov().
      
      To use this API the PF driver will setup the PF drvdata in the probe()
      function. pci_enable_sriov() is only called from sriov_configure() which
      cannot happen until probe() completes, ensuring no VF races with drvdata
      setup.
      
      For removal, the PF driver must call pci_disable_sriov() in its remove
      function before destroying any of the drvdata. This ensures that all VF
      drivers are unbound before returning, fencing concurrent access to the
      drvdata.
      
      Introducing a dedicated function for this access makes the special
      locking scheme clear and documents the requirements on the PF/VF drivers
      using it.
      
      Link: https://lore.kernel.org/all/20220224142024.147653-5-yishaih@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
      Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
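The implicit-locking contract described above can be illustrated with a minimal user-space sketch. The struct layouts and checks here are simplified stand-ins for the kernel's, not the real definitions; the kernel helper also returns ERR_PTR(-EINVAL) rather than NULL on a mismatch.

```c
#include <stdlib.h>

struct pci_driver { const char *name; };

struct pci_dev {
    int is_virtfn;              /* set for SR-IOV VFs */
    struct pci_dev *physfn;     /* VF -> owning PF */
    struct pci_driver *driver;  /* bound driver, if any */
    void *drvdata;              /* driver-private data */
};

/* Return the PF's drvdata, but only if the caller really is a VF whose
 * PF is bound to the expected driver.  While the VF's own driver is
 * attached, pci_enable/disable_sriov() guarantees the PF's probe() has
 * finished and its remove() has not yet torn the drvdata down. */
static void *pci_iov_get_pf_drvdata(struct pci_dev *dev,
                                    struct pci_driver *pf_driver)
{
    struct pci_dev *pf;

    if (!dev->is_virtfn)
        return NULL;            /* caller is not a VF */

    pf = dev->physfn;
    if (pf->driver != pf_driver)
        return NULL;            /* PF bound to an unexpected driver */

    return pf->drvdata;
}
```

A VF driver would call this from its own probe() path, passing the PF driver it expects, and treat a failure as "the PF is not in a state I can cooperate with".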
    • PCI/IOV: Add pci_iov_vf_id() to get VF index · 21ca9fb6
      Jason Gunthorpe authored
      The PCI core uses the VF index internally, often called the vf_id,
      during setup of the VF, e.g. in pci_iov_add_virtfn().
      
      This index is needed for device drivers that implement live migration
      for their internal operations that configure/control their VFs.
      
      Specifically, the mlx5_vfio_pci driver introduced in upcoming patches in
      this series needs it, rather than the bus/device/function that is
      exposed today.
      
      Add pci_iov_vf_id() which computes the vf_id by reversing the math that
      was used to create the bus/device/function.
      
      Link: https://lore.kernel.org/all/20220224142024.147653-2-yishaih@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
      Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
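The "reversed math" can be sketched as plain C. Per the PCIe SR-IOV capability, a VF's routing ID is the PF's routing ID plus a first-VF offset plus vf_id times a stride, so the index falls out by inverting that. The function signature and the example offset/stride values below are illustrative, not the kernel's definitions.

```c
/* SR-IOV parameters read from the PF's capability (illustrative). */
struct sriov_info { int offset, stride; };

/* Routing ID = (bus << 8) | devfn.  The VF's routing ID is:
 *   pf_rid + offset + vf_id * stride
 * so vf_id is recovered by reversing that arithmetic. */
static int vf_id(int vf_bus, int vf_devfn,
                 int pf_bus, int pf_devfn, const struct sriov_info *iov)
{
    int vf_rid = (vf_bus << 8) + vf_devfn;
    int pf_rid = (pf_bus << 8) + pf_devfn;

    return (vf_rid - pf_rid - iov->offset) / iov->stride;
}
```

With offset 1 and stride 1 (a common layout), the VF at devfn 3 under a PF at bus 0, devfn 0 is the third VF, index 2.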
  2. 11 Jan 2022, 1 commit
  3. 18 Dec 2021, 1 commit
  4. 17 Dec 2021, 2 commits
  5. 09 Dec 2021, 2 commits
  6. 19 Nov 2021, 1 commit
  7. 12 Nov 2021, 1 commit
  8. 11 Nov 2021, 1 commit
  9. 08 Nov 2021, 1 commit
  10. 30 Oct 2021, 1 commit
  11. 18 Oct 2021, 2 commits
  12. 13 Oct 2021, 1 commit
    • PCI: Return NULL for to_pci_driver(NULL) · 8e9028b3
      Bjorn Helgaas authored
      to_pci_driver() takes a pointer to a struct device_driver and uses
      container_of() to find the struct pci_driver that contains it.
      
      If given a NULL pointer to a struct device_driver, return a NULL pci_driver
      pointer instead of applying container_of() to NULL.
      
      This simplifies callers that would otherwise have to check for a NULL
      pointer first.
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
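Why the NULL guard matters: container_of() subtracts the member's offset from the given pointer, so applying it to NULL yields a bogus non-NULL pointer. A minimal stand-alone sketch of the pattern (the struct bodies are trimmed stand-ins for the kernel's):

```c
#include <stddef.h>

struct device_driver { const char *name; };

struct pci_driver {
    const char *name;
    struct device_driver driver;   /* embedded generic driver */
};

/* Map a pointer to the embedded device_driver back to its containing
 * pci_driver.  Without the guard, NULL would become (NULL - offset),
 * a garbage non-NULL pointer that callers could not detect. */
static struct pci_driver *to_pci_driver(struct device_driver *drv)
{
    if (!drv)
        return NULL;
    return (struct pci_driver *)((char *)drv -
                                 offsetof(struct pci_driver, driver));
}
```

Callers can then write `to_pci_driver(dev->driver)` unconditionally and check the single result for NULL, instead of checking `dev->driver` first.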
  13. 12 Oct 2021, 1 commit
  14. 22 Sep 2021, 1 commit
  15. 01 Sep 2021, 4 commits
  16. 27 Aug 2021, 3 commits
  17. 25 Aug 2021, 2 commits
  18. 23 Aug 2021, 1 commit
    • PCI: Introduce domain_nr in pci_host_bridge · 15d82ca2
      Boqun Feng authored
      Currently we retrieve the PCI domain number of the host bridge from the
      bus sysdata (or from pci_config_window if PCI_DOMAINS_GENERIC=y). This
      information is actually available at PCI host bridge probing time, so it
      makes sense to store it in pci_host_bridge. One benefit is supporting
      PCI on Hyper-V for ARM64: the Hyper-V host bridge has no
      pci_config_window, while ARM64 is a PCI_DOMAINS_GENERIC=y arch, so the
      PCI domain number cannot be retrieved from pci_config_window on an ARM64
      Hyper-V guest.
      
      In preparation for ARM64 Hyper-V PCI support, introduce domain_nr in
      pci_host_bridge along with a sentinel value, allowing drivers to set
      domain numbers properly at probing time. Currently,
      CONFIG_PCI_DOMAINS_GENERIC=y arches are the only users of the
      newly-introduced field.
      
      Link: https://lore.kernel.org/r/20210726180657.142727-2-boqun.feng@gmail.com
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
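The sentinel pattern the commit describes can be sketched as follows. The field name follows the commit; the sentinel's name and value (-1) and the fallback logic are assumptions for illustration, not the kernel's exact definitions.

```c
/* Sentinel meaning "the host-bridge driver did not set a domain"
 * (name and value assumed here for illustration). */
#define PCI_DOMAIN_NR_NOT_SET (-1)

struct pci_host_bridge {
    int domain_nr;   /* initialized to PCI_DOMAIN_NR_NOT_SET at alloc */
};

/* At bridge registration: honor a domain the driver set at probe time,
 * otherwise fall back to whatever the generic code would assign
 * (e.g. from pci_config_window or an allocator). */
static int bridge_domain(const struct pci_host_bridge *b, int generic_nr)
{
    return b->domain_nr == PCI_DOMAIN_NR_NOT_SET ? generic_nr
                                                 : b->domain_nr;
}
```

This is exactly what lets a driver without a pci_config_window (the Hyper-V ARM64 case above) still supply a valid domain number: it writes domain_nr before registering the bridge, and the generic path is skipped.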
  19. 21 Aug 2021, 6 commits
  20. 19 Aug 2021, 2 commits
  21. 18 Aug 2021, 4 commits