1. 30 Jul, 2019: 11 commits
  2. 24 Jul, 2019: 1 commit
  3. 09 Jul, 2019: 1 commit
  4. 22 Jun, 2019: 1 commit
  5. 15 Jun, 2019: 1 commit
  6. 14 Jun, 2019: 1 commit
  7. 13 Jun, 2019: 1 commit
  8. 31 May, 2019: 1 commit
  9. 27 May, 2019: 1 commit
    • PCI: PM: Avoid possible suspend-to-idle issue · d491f2b7
      By Rafael J. Wysocki
      If a PCI driver leaves its device in D0 and calls pci_save_state()
      on the device in its ->suspend() or ->suspend_late() callback, it
      can expect the device to stay in D0 over the whole s2idle cycle.
      However, that may not hold if a spurious wakeup occurs while the
      system is suspended: in that case pci_pm_suspend_noirq() runs
      again after pci_pm_resume_noirq(), which calls pci_restore_state()
      via pci_pm_default_resume_early().  That clears state_saved, so
      the second iteration of pci_pm_suspend_noirq() invokes
      pci_prepare_to_sleep(), which may change the power state of the
      device.
      
      To avoid that, add a new internal flag, skip_bus_pm, that is set
      by pci_pm_suspend_noirq() when it runs for the first time during
      the given system suspend-resume cycle if the state of the device
      has already been saved and the device is still in D0.  Setting
      that flag causes subsequent iterations of pci_pm_suspend_noirq()
      to set state_saved for pci_pm_resume_noirq(), so that it always
      restores the device state from the originally saved data, and to
      avoid calling pci_prepare_to_sleep() for the device.
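      
      As a rough sketch, paraphrased from this changelog rather than
      taken from the actual kernel diff, the noirq-phase flow becomes:
      
        if (pci_dev->skip_bus_pm) {
                /*
                 * Rerun after a spurious wakeup: reinstate state_saved
                 * so that pci_pm_resume_noirq() restores the originally
                 * saved data, and leave the power state alone.
                 */
                pci_dev->state_saved = true;
        } else if (!pci_dev->state_saved) {
                pci_save_state(pci_dev);
                pci_prepare_to_sleep(pci_dev);  /* may change the state */
        }
        
        /*
         * First pass in this cycle: the state is saved and the device
         * is still in D0, so skip bus-level PM on any rerun.
         */
        if (pci_dev->state_saved && pci_dev->current_state == PCI_D0)
                pci_dev->skip_bus_pm = true;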
      
      Fixes: 33e4f80e ("ACPI / PM: Ignore spurious SCI wakeups from suspend-to-idle")
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  10. 07 May, 2019: 1 commit
  11. 30 Apr, 2019: 1 commit
  12. 23 Apr, 2019: 3 commits
  13. 18 Apr, 2019: 1 commit
  14. 06 Apr, 2019: 1 commit
    • PCI: Work around Pericom PCIe-to-PCI bridge Retrain Link erratum · 4ec73791
      By Stefan Mätje
      Due to an erratum in some Pericom PCIe-to-PCI bridges in reverse mode
      (conventional PCI on primary side, PCIe on downstream side), the Retrain
      Link bit needs to be cleared manually to allow the link training to
      complete successfully.
      
      If it is not cleared manually, link training restarts continuously
      and no devices below the PCI-to-PCIe bridge can be accessed.  That
      means drivers for devices below the bridge will be loaded but won't
      work and may even crash, because the driver only ever reads 0xffff.
      
      See the Pericom Errata Sheet PI7C9X111SLB_errata_rev1.2_102711.pdf for
      details.  Devices known as affected so far are: PI7C9X110, PI7C9X111SL,
      PI7C9X130.
      
      Add a new flag, clear_retrain_link, in struct pci_dev.  Quirks for affected
      devices set this bit.
      
      Note that pcie_retrain_link() lives in aspm.c because that's currently the
      only place we use it, but this erratum is not specific to ASPM, and we may
      retrain links for other reasons in the future.
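      
      A hedged sketch of how the flag might be honored inside
      pcie_retrain_link(); 'parent' stands for the bridge upstream of
      the link and is an illustrative name, not necessarily the
      variable used there:
      
        u16 reg16;
        
        pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
        reg16 |= PCI_EXP_LNKCTL_RL;
        pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
        if (parent->clear_retrain_link) {
                /*
                 * Erratum workaround: the affected bridges never clear
                 * Retrain Link themselves, and training keeps
                 * restarting while the bit stays set, so clear it by
                 * hand.
                 */
                reg16 &= ~PCI_EXP_LNKCTL_RL;
                pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
        }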
      Signed-off-by: Stefan Mätje <stefan.maetje@esd.eu>
      [bhelgaas: apply regardless of CONFIG_PCIEASPM]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      CC: stable@vger.kernel.org
  15. 26 Feb, 2019: 1 commit
  16. 18 Feb, 2019: 1 commit
    • genirq/affinity: Add new callback for (re)calculating interrupt sets · c66d4bd1
      By Ming Lei
      The interrupt affinity spreading mechanism supports spreading out
      affinities for one or more interrupt sets. An interrupt set
      contains one or more interrupts. Each set is mapped to a specific
      functionality of a device, e.g. general I/O queues and read I/O
      queues of multiqueue block devices.
      
      The number of interrupts per set is defined by the driver. It
      depends on the total number of available interrupts for the
      device, which is determined by the PCI capabilities and the
      availability of underlying CPU resources, and on the number of
      queues which the device provides and the driver wants to
      instantiate.
      
      The driver passes initial configuration for the interrupt allocation via a
      pointer to struct irq_affinity.
      
      Right now the allocation mechanism is complex because it requires
      a loop in the driver to determine the maximum number of interrupts
      that are provided by the PCI capabilities and the underlying CPU
      resources.  This loop would have to be replicated in every driver
      which wants to utilize this mechanism, which is unwanted code
      duplication and error-prone.
      
      In order to move this into generic facilities, a mechanism is
      required that allows the recalculation of the interrupt sets and
      their size in the core code. As the core code does not have any
      knowledge about the underlying device, a driver-specific callback
      is required in struct irq_affinity, which can be invoked by the
      core code. The callback gets the number of available interrupts
      as an argument, so the driver can calculate the corresponding
      number and size of interrupt sets.
      
      At the moment, the struct irq_affinity pointer which is handed in
      from the driver and passed through to several core functions is
      marked 'const', but for the callback to be able to modify the data
      in the struct, the 'const' qualifier has to be removed.
      
      Add the optional callback to struct irq_affinity, which allows
      drivers to recalculate the number and size of interrupt sets, and
      remove the 'const' qualifier.
      
      For simple invocations that do not supply a callback, a default
      callback is installed which just sets nr_sets to 1 and transfers
      the number of spreadable vectors to the set_size array at index 0.
      
      This is for now guarded by a check for nr_sets != 0 to keep the
      NVMe driver working until it is converted to the callback
      mechanism.
      
      To make sure that the driver configuration is correct under all
      circumstances, the callback is invoked even when there are no
      interrupts for queues left, i.e. when the pre/post requirements
      already exhaust the number of available interrupts.
      
      At the PCI layer irq_create_affinity_masks() has to be invoked even for the
      case where the legacy interrupt is used. That ensures that the callback is
      invoked and the device driver can adjust to that situation.
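      
      A sketch of what a driver-side configuration might look like; the
      driver name and the vector split are invented, and the callback
      member name 'calc_sets' is an assumption about this series:
      
        #include <linux/interrupt.h>
        
        /*
         * Hypothetical callback: split the spreadable vectors between
         * default and read queue sets, as a multiqueue block driver
         * might.
         */
        static void mydrv_calc_sets(struct irq_affinity *affd,
                                    unsigned int nvecs)
        {
                unsigned int read_vecs = nvecs / 2;
        
                affd->nr_sets = 2;
                affd->set_size[0] = nvecs - read_vecs; /* default queues */
                affd->set_size[1] = read_vecs;         /* read queues */
        }
        
        /* No longer 'const', so the core code may invoke the callback: */
        static struct irq_affinity affd = {
                .pre_vectors = 1,       /* e.g. one admin interrupt */
                .calc_sets   = mydrv_calc_sets,
        };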
      
      [ tglx: Fixed the simple case (no sets required). Moved the sanity
        check for nr_sets after the invocation of the callback so it
        catches broken drivers. Fixed the kernel doc comments for struct
        irq_affinity and de-'This patch'-ed the changelog ]
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bjorn Helgaas <helgaas@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: linux-nvme@lists.infradead.org
      Cc: linux-pci@vger.kernel.org
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Sumit Saxena <sumit.saxena@broadcom.com>
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Shivasharan Srikanteshwara <shivasharan.srikanteshwara@broadcom.com>
      Link: https://lkml.kernel.org/r/20190216172228.512444498@linutronix.de
      
  17. 23 Jan, 2019: 1 commit
  18. 02 Jan, 2019: 1 commit
  19. 20 Dec, 2018: 1 commit
  20. 07 Dec, 2018: 1 commit
  21. 05 Dec, 2018: 1 commit
  22. 13 Oct, 2018: 1 commit
  23. 11 Oct, 2018: 2 commits
    • PCI: Remove pci_unmap_addr() wrappers for DMA API · 18b01b16
      By Christoph Hellwig
      Only some of these were still used, by the cxgb4 driver, even
      though that driver otherwise uses the generic DMA API.
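      
      For illustration, converting one such use is mechanical; 'd' and
      the 'mapping' field below are invented names:
      
        /* Before, via the PCI wrappers: */
        DECLARE_PCI_UNMAP_ADDR(mapping);
        pci_unmap_addr_set(d, mapping, addr);
        busaddr = pci_unmap_addr(d, mapping);
        
        /* After, using the generic DMA API equivalents directly: */
        DEFINE_DMA_UNMAP_ADDR(mapping);
        dma_unmap_addr_set(d, mapping, addr);
        busaddr = dma_unmap_addr(d, mapping);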
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    • PCI/P2PDMA: Support peer-to-peer memory · 52916982
      By Logan Gunthorpe
      Some PCI devices may have memory mapped in a BAR space that's intended for
      use in peer-to-peer transactions.  To enable such transactions the memory
      must be registered with ZONE_DEVICE pages so it can be used by DMA
      interfaces in existing drivers.
      
      Add an interface for other subsystems to find and allocate chunks of P2P
      memory as necessary to facilitate transfers between two PCI peers:
      
        struct pci_dev *pci_p2pmem_find[_many]();
        int pci_p2pdma_distance[_many]();
        void *pci_alloc_p2pmem();
      
      The new interface requires a driver to collect a list of client devices
      involved in the transaction then call pci_p2pmem_find() to obtain any
      suitable P2P memory.  Alternatively, if the caller knows a device which
      provides P2P memory, they can use pci_p2pdma_distance() to determine if it
      is usable.  With a suitable p2pmem device, memory can then be allocated
      with pci_alloc_p2pmem() for use in DMA transactions.
      
      Depending on hardware, using peer-to-peer memory may reduce the bandwidth
      of the transfer but can significantly reduce pressure on system memory.
      This may be desirable in many cases: for example a system could be designed
      with a small CPU connected to a PCIe switch by a small number of lanes
      which would maximize the number of lanes available to connect to NVMe
      devices.
      
      The code is designed to only utilize the p2pmem device if all the devices
      involved in a transfer are behind the same PCI bridge.  This is because we
      have no way of knowing whether peer-to-peer routing between PCIe Root Ports
      is supported (PCIe r4.0, sec 1.3.1).  Additionally, the benefits of P2P
      transfers that go through the RC are limited to reducing DRAM usage
      and, in some cases, coding convenience.  The PCI-SIG may be exploring
      adding a new capability bit to advertise whether this is possible for
      future hardware.
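      
      A hedged usage sketch built on the calls above; 'clients',
      'num_clients' and 'len' are illustrative names, and
      pci_free_p2pmem() is assumed to be the matching release helper:
      
        #include <linux/pci-p2pdma.h>
        
        struct pci_dev *provider;
        void *buf;
        
        /* Find P2P memory usable by every client in the transaction. */
        provider = pci_p2pmem_find_many(clients, num_clients);
        if (!provider)
                return -ENODEV;  /* fall back to regular system memory */
        
        buf = pci_alloc_p2pmem(provider, len);
        if (!buf)
                return -ENOMEM;
        
        /* ... set up the DMA transaction between the two PCI peers ... */
        
        pci_free_p2pmem(provider, buf, len);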
      
      This commit includes significant rework and feedback from Christoph
      Hellwig.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      [bhelgaas: fold in fix from Keith Busch <keith.busch@intel.com>:
      https://lore.kernel.org/linux-pci/20181012155920.15418-1-keith.busch@intel.com,
      to address comment from Dan Carpenter <dan.carpenter@oracle.com>, fold in
      https://lore.kernel.org/linux-pci/20181017160510.17926-1-logang@deltatee.com]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  24. 03 Oct, 2018: 1 commit
  25. 29 Sep, 2018: 1 commit
    • PCI: Add support for Immediate Readiness · d6112f8d
      By Felipe Balbi
      PCIe r4.0, sec 7.5.1.1.4 defines a new bit in the Status Register:
      
        Immediate Readiness – This optional bit, when Set, indicates the Function
        is guaranteed to be ready to successfully complete valid configuration
        accesses at any time following any reset that the host is capable of
        issuing Configuration Requests to this Function.
      
        When this bit is Set, for accesses to this Function, software is exempt
        from all requirements to delay configuration accesses following any type
        of reset, including but not limited to the timing requirements defined in
        Section 6.6.
      
      This means that all delays after a Conventional or Function Reset can be
      skipped.
      
      Read this bit and cache its value in a flag inside struct pci_dev,
      to be checked later to decide whether we must delay, or can skip
      delays, after a reset.  While at it, also move the explicit
      msleep(100) call from pcie_flr() and pci_af_flr() to pci_dev_wait().
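      
      A hedged sketch of both halves; the 'imm_ready' flag name is an
      assumption, and the macro name follows the rename noted below:
      
        u16 status;
        
        /* At enumeration time, cache the Status Register bit: */
        pci_read_config_word(dev, PCI_STATUS, &status);
        dev->imm_ready = !!(status & PCI_STATUS_IMM_READY);
        
        /* Later, in pci_dev_wait()-style code, skip the delay: */
        if (dev->imm_ready)
                return 0;       /* ready immediately after reset */
        msleep(delay);          /* e.g. the 100 ms moved from pcie_flr() */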
      Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
      [bhelgaas: rename PCI_STATUS_IMMEDIATE to PCI_STATUS_IMM_READY]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  26. 12 Sep, 2018: 1 commit
  27. 23 Aug, 2018: 1 commit