1. 16 Oct 2019, 3 commits
  2. 21 Sep 2019, 1 commit
  3. 07 Sep 2019, 1 commit
  4. 06 Sep 2019, 1 commit
  5. 31 Aug 2019, 1 commit
  6. 28 Aug 2019, 1 commit
  7. 12 Aug 2019, 1 commit
  8. 09 Aug 2019, 1 commit
  9. 30 Jul 2019, 11 commits
  10. 24 Jul 2019, 1 commit
  11. 09 Jul 2019, 1 commit
  12. 22 Jun 2019, 1 commit
  13. 15 Jun 2019, 1 commit
  14. 14 Jun 2019, 1 commit
  15. 13 Jun 2019, 1 commit
  16. 31 May 2019, 1 commit
  17. 27 May 2019, 1 commit
    •
      PCI: PM: Avoid possible suspend-to-idle issue · d491f2b7
      Committed by Rafael J. Wysocki
      If a PCI driver leaves the device handled by it in D0 and calls
      pci_save_state() on the device in its ->suspend() or ->suspend_late()
      callback, it can expect the device to stay in D0 over the whole
      s2idle cycle.  However, that may not be the case if there is a
      spurious wakeup while the system is suspended, because in that case
      pci_pm_suspend_noirq() will run again after pci_pm_resume_noirq()
      which calls pci_restore_state(), via pci_pm_default_resume_early(),
      so state_saved is cleared and the second iteration of
      pci_pm_suspend_noirq() will invoke pci_prepare_to_sleep() which
      may change the power state of the device.
      
      To avoid that, add a new internal flag, skip_bus_pm, that will be set
      by pci_pm_suspend_noirq() when it runs for the first time during the
      given system suspend-resume cycle if the state of the device has
      been saved already and the device is still in D0.  Setting that flag
      will cause subsequent iterations of pci_pm_suspend_noirq() to set
      state_saved for pci_pm_resume_noirq(), so that it always restores the
      device state from the originally saved data, and to avoid calling
      pci_prepare_to_sleep() for the device.
      
      Fixes: 33e4f80e ("ACPI / PM: Ignore spurious SCI wakeups from suspend-to-idle")
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
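      The flag logic described in this commit can be modeled as a small state
      machine. The sketch below is illustrative only, not the kernel
      implementation: the struct and function names (dev_pm_model,
      suspend_noirq, resume_noirq) are invented for the example, and the real
      code operates on struct pci_dev and config space.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Hypothetical per-device PM state, standing in for struct pci_dev. */
      struct dev_pm_model {
          bool state_saved;        /* driver called pci_save_state() */
          bool in_d0;              /* driver left the device in D0 */
          bool skip_bus_pm;        /* new flag added by this commit */
          bool prepared_to_sleep;  /* pci_prepare_to_sleep() was invoked */
      };

      /* One noirq suspend pass; may run twice per cycle on spurious wakeup. */
      void suspend_noirq(struct dev_pm_model *d, bool first_pass)
      {
          if (first_pass && d->state_saved && d->in_d0)
              d->skip_bus_pm = true;   /* remember: keep D0 + saved state */

          if (d->skip_bus_pm) {
              /* Force restore from the originally saved data and skip
               * pci_prepare_to_sleep(), so the power state never changes. */
              d->state_saved = true;
              return;
          }
          if (!d->state_saved)
              d->prepared_to_sleep = true;  /* may change the power state */
      }

      /* noirq resume clears state_saved via pci_restore_state(). */
      void resume_noirq(struct dev_pm_model *d)
      {
          d->state_saved = false;
      }
      ```

      Running the spurious-wakeup sequence (suspend, resume, suspend again)
      against this model shows the device staying in D0 with its state restored
      from the original save, which is exactly the guarantee the fix provides.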
  18. 07 May 2019, 1 commit
  19. 30 Apr 2019, 1 commit
  20. 23 Apr 2019, 3 commits
  21. 18 Apr 2019, 1 commit
  22. 06 Apr 2019, 1 commit
    •
      PCI: Work around Pericom PCIe-to-PCI bridge Retrain Link erratum · 4ec73791
      Committed by Stefan Mätje
      Due to an erratum in some Pericom PCIe-to-PCI bridges in reverse mode
      (conventional PCI on primary side, PCIe on downstream side), the Retrain
      Link bit needs to be cleared manually to allow the link training to
      complete successfully.
      
      If it is not cleared manually, link training is continuously restarted
      and no devices below the bridge can be accessed.  That means drivers for
      devices below the bridge will load but won't work, and may even crash
      because they only read 0xffff.
      
      See the Pericom Errata Sheet PI7C9X111SLB_errata_rev1.2_102711.pdf for
      details.  Devices known as affected so far are: PI7C9X110, PI7C9X111SL,
      PI7C9X130.
      
      Add a new flag, clear_retrain_link, in struct pci_dev.  Quirks for affected
      devices set this bit.
      
      Note that pcie_retrain_link() lives in aspm.c because that's currently the
      only place we use it, but this erratum is not specific to ASPM, and we may
      retrain links for other reasons in the future.
      Signed-off-by: Stefan Mätje <stefan.maetje@esd.eu>
      [bhelgaas: apply regardless of CONFIG_PCIEASPM]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      CC: stable@vger.kernel.org
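      The workaround flow can be sketched as below. This is a simplified model,
      not the kernel's pcie_retrain_link(): the bridge struct and config-space
      access are faked for illustration, though PCI_EXP_LNKCTL_RL (0x0020) is
      the real Retrain Link bit from the PCIe spec.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      #define PCI_EXP_LNKCTL_RL 0x0020  /* Retrain Link bit in Link Control */

      /* Hypothetical bridge model standing in for config-space accesses;
       * clear_retrain_link mirrors the struct pci_dev flag this patch adds. */
      struct bridge_model {
          unsigned short lnkctl;       /* Link Control register */
          bool clear_retrain_link;     /* set by a quirk on affected parts */
      };

      /* Sketch of the retrain-link flow with the erratum workaround. */
      void retrain_link(struct bridge_model *b)
      {
          b->lnkctl |= PCI_EXP_LNKCTL_RL;      /* start link training */
          if (b->clear_retrain_link) {
              /* Erratum workaround: training never completes on affected
               * Pericom bridges unless Retrain Link is cleared manually. */
              b->lnkctl &= ~PCI_EXP_LNKCTL_RL;
          }
          /* ...then poll the Link Status register for completion (omitted). */
      }
      ```

      A quirk would set clear_retrain_link for the affected device IDs
      (PI7C9X110, PI7C9X111SL, PI7C9X130), leaving all other bridges on the
      normal path where the bit self-clears in hardware.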
  23. 26 Feb 2019, 1 commit
  24. 18 Feb 2019, 1 commit
    •
      genirq/affinity: Add new callback for (re)calculating interrupt sets · c66d4bd1
      Committed by Ming Lei
      The interrupt affinity spreading mechanism supports spreading affinities
      across one or more interrupt sets. An interrupt set contains one or
      more interrupts. Each set is mapped to a specific functionality of a
      device, e.g. general I/O queues and read I/O queues of multiqueue block
      devices.
      
      The number of interrupts per set is defined by the driver. It depends on
      the total number of interrupts available for the device, which is
      determined by the PCI capabilities and the availability of underlying
      CPU resources, and on the number of queues the device provides and the
      driver wants to instantiate.
      
      The driver passes initial configuration for the interrupt allocation via a
      pointer to struct irq_affinity.
      
      Right now the allocation mechanism is complex, as it requires a loop in
      the driver to determine the maximum number of interrupts provided by
      the PCI capabilities and the underlying CPU resources.  This loop would
      have to be replicated in every driver which wants to utilize the
      mechanism, which is unwanted, error-prone code duplication.
      
      In order to move this into generic facilities, the core code needs a
      mechanism which allows the interrupt sets and their sizes to be
      recalculated. As the core code does not have any knowledge about the
      underlying device, a driver-specific callback is required in struct
      irq_affinity, which can be invoked by the core code. The callback gets
      the number of available interrupts as an argument, so the driver can
      calculate the corresponding number and size of its interrupt sets.
      
      At the moment the struct irq_affinity pointer which is handed in from the
      driver and passed through to several core functions is marked 'const', but for
      the callback to be able to modify the data in the struct it's required to
      remove the 'const' qualifier.
      
      Add the optional callback to struct irq_affinity, which allows drivers to
      recalculate the number and size of interrupt sets and remove the 'const'
      qualifier.
      
      For simple invocations, which do not supply a callback, a default callback
      is installed, which just sets nr_sets to 1 and transfers the number of
      spreadable vectors to the set_size array at index 0.
      
      This is for now guarded by a check for nr_sets != 0 to keep the NVME driver
      working until it is converted to the callback mechanism.
      
      To make sure that the driver configuration is correct under all
      circumstances, the callback is invoked even when there are no interrupts
      left for queues, i.e. when the pre/post requirements already exhaust the
      number of available interrupts.
      
      At the PCI layer irq_create_affinity_masks() has to be invoked even for the
      case where the legacy interrupt is used. That ensures that the callback is
      invoked and the device driver can adjust to that situation.
      
      [ tglx: Fixed the simple case (no sets required). Moved the sanity check
        	for nr_sets after the invocation of the callback so it catches
        	broken drivers. Fixed the kernel doc comments for struct
        	irq_affinity and de-'This patch'-ed the changelog ]
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bjorn Helgaas <helgaas@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: linux-nvme@lists.infradead.org
      Cc: linux-pci@vger.kernel.org
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Sumit Saxena <sumit.saxena@broadcom.com>
      Cc: Kashyap Desai <kashyap.desai@broadcom.com>
      Cc: Shivasharan Srikanteshwara <shivasharan.srikanteshwara@broadcom.com>
      Link: https://lkml.kernel.org/r/20190216172228.512444498@linutronix.de
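      The callback scheme described above can be sketched in a self-contained
      model. The field and function names follow the changelog (nr_sets,
      set_size, calc_sets, and a default callback), but the struct layout and
      the helper calc_affinity_sets() are simplified stand-ins, not the actual
      genirq code.

      ```c
      #include <assert.h>
      #include <stddef.h>

      #define IRQ_AFFINITY_MAX_SETS 4

      /* Simplified model of struct irq_affinity with the new callback. */
      struct irq_affinity {
          unsigned int pre_vectors;   /* vectors reserved before the sets */
          unsigned int post_vectors;  /* vectors reserved after the sets */
          unsigned int nr_sets;
          unsigned int set_size[IRQ_AFFINITY_MAX_SETS];
          void (*calc_sets)(struct irq_affinity *, unsigned int nvecs);
      };

      /* Default callback for simple invocations without a driver callback:
       * one set that takes all spreadable vectors. */
      static void default_calc_sets(struct irq_affinity *a, unsigned int nvecs)
      {
          a->nr_sets = 1;
          a->set_size[0] = nvecs;
      }

      /* Core-side helper: subtract the pre/post vectors, then let the driver
       * (or the default) recalculate its interrupt sets.  Invoked even when
       * no vectors are left for queues, so the driver always sees the
       * resulting configuration. */
      void calc_affinity_sets(struct irq_affinity *a, unsigned int nvecs)
      {
          unsigned int spreadable = nvecs - a->pre_vectors - a->post_vectors;

          if (!a->calc_sets)
              a->calc_sets = default_calc_sets;
          a->calc_sets(a, spreadable);
      }
      ```

      A driver such as NVMe would supply its own calc_sets to split the
      spreadable vectors between, say, read and write queue sets, instead of
      looping over allocation attempts itself.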
      
  25. 23 Jan 2019, 1 commit
  26. 02 Jan 2019, 1 commit