1. 12 Jul, 2022 · 1 commit
  2. 14 May, 2022 · 2 commits
  3. 12 May, 2022 · 2 commits
  4. 03 May, 2022 · 1 commit
  5. 28 Apr, 2022 · 1 commit
    • PCI: hv: Fix hv_arch_irq_unmask() for multi-MSI · 455880df
      Jeffrey Hugo authored
      In the multi-MSI case, hv_arch_irq_unmask() will only operate on the first
      MSI of the N allocated.  This is because only the first msi_desc is cached
      and it is shared by all the MSIs of the multi-MSI block.  This means that
      hv_arch_irq_unmask() gets the correct address, but the wrong data (always
      0).
      
      This can break MSIs.
      
      Let's assume MSI0 is vector 34 on CPU0, and MSI1 is vector 33 on CPU0.
      
      hv_arch_irq_unmask() is called on MSI0.  It uses a hypercall to configure
      the MSI address and data (0) to vector 34 of CPU0.  This is correct.  Then
      hv_arch_irq_unmask() is called on MSI1.  It uses another hypercall to
      configure the MSI address and data (0) to vector 33 of CPU0.  This is
      wrong, and results in both MSI0 and MSI1 being routed to vector 33.  Linux
      will observe extra instances of MSI1 and no instances of MSI0 despite the
      endpoint device behaving correctly.
      
      For the multi-MSI case, we need unique address and data info for each MSI,
      but the cached msi_desc does not provide that.  However, that information
      can be obtained from the int_desc cached in the chip_data by
      compose_msi_msg().  Fix the multi-MSI case to use that cached information
      instead.  Since hv_set_msi_entry_from_desc() is no longer applicable,
      remove it.
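      
      To make the shape of the fix concrete, here is a small, self-contained C
      model of the before/after behaviour (a sketch, not driver code: names such
      as msi_desc_model, int_desc_model and hypercall_retarget() are invented
      stand-ins for the real pci-hyperv types). The only point it demonstrates
      is the one above: reading the per-interrupt address/data pair cached in
      chip_data instead of the msi_desc shared by the whole multi-MSI block.
      
      #include <stdint.h>
      #include <stdio.h>
      
      /* Stand-in for the msi_desc shared by all MSIs of a multi-MSI block. */
      struct msi_desc_model {
              uint64_t address;
              uint32_t data;          /* describes only the first MSI: 0 */
      };
      
      /* Stand-in for the int_desc cached per IRQ in chip_data. */
      struct int_desc_model {
              uint64_t address;
              uint32_t data;          /* unique per MSI */
      };
      
      struct irq_data_model {
              struct msi_desc_model *msi_desc;   /* shared */
              struct int_desc_model *int_desc;   /* per IRQ ("chip_data") */
      };
      
      /* Stand-in for the retarget hypercall: just report what it was given. */
      static void hypercall_retarget(uint64_t address, uint32_t data)
      {
              printf("retarget: address=0x%llx data=%u\n",
                     (unsigned long long)address, (unsigned)data);
      }
      
      /* Before the fix: every MSI of the block reports the same data (0). */
      static void unmask_old(struct irq_data_model *d)
      {
              hypercall_retarget(d->msi_desc->address, d->msi_desc->data);
      }
      
      /* After the fix: use the per-IRQ info cached by compose_msi_msg(). */
      static void unmask_new(struct irq_data_model *d)
      {
              if (!d->int_desc)
                      return;         /* nothing composed yet for this IRQ */
              hypercall_retarget(d->int_desc->address, d->int_desc->data);
      }
      
      int main(void)
      {
              struct msi_desc_model shared = { .address = 0xfee00000, .data = 0 };
              struct int_desc_model desc0  = { .address = 0xfee00000, .data = 0 };
              struct int_desc_model desc1  = { .address = 0xfee00000, .data = 1 };
              struct irq_data_model msi0 = { &shared, &desc0 };
              struct irq_data_model msi1 = { &shared, &desc1 };
      
              unmask_old(&msi0);      /* data 0: correct for the first MSI */
              unmask_old(&msi1);      /* data 0 again: wrong, collides with MSI0 */
              unmask_new(&msi0);      /* data 0 */
              unmask_new(&msi1);      /* data 1: each MSI keeps its own data */
              return 0;
      }
      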
      Signed-off-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Link: https://lore.kernel.org/r/1651068453-29588-1-git-send-email-quic_jhugo@quicinc.com
      Signed-off-by: Wei Liu <wei.liu@kernel.org>
  6. 25 Apr, 2022 · 3 commits
  7. 31 Mar, 2022 · 1 commit
  8. 29 Mar, 2022 · 1 commit
  9. 02 Mar, 2022 · 1 commit
  10. 03 Feb, 2022 · 1 commit
  11. 12 Jan, 2022 · 2 commits
  12. 17 Dec, 2021 · 1 commit
  13. 19 Nov, 2021 · 1 commit
  14. 13 Oct, 2021 · 1 commit
  15. 24 Sep, 2021 · 1 commit
  16. 23 Aug, 2021 · 4 commits
  17. 13 Aug, 2021 · 1 commit
  18. 21 Jun, 2021 · 1 commit
  19. 04 Jun, 2021 · 2 commits
  20. 21 Apr, 2021 · 1 commit
  21. 20 Apr, 2021 · 1 commit
  22. 17 Mar, 2021 · 1 commit
  23. 11 Feb, 2021 · 1 commit
  24. 28 Jan, 2021 · 1 commit
  25. 29 Oct, 2020 · 1 commit
  26. 02 Oct, 2020 · 1 commit
    • PCI: hv: Fix hibernation in case interrupts are not re-created · 915cff7f
      Dexuan Cui authored
      pci_restore_msi_state() directly writes the MSI/MSI-X related registers
      via MMIO. On a physical machine this works perfectly, but for a Linux VM
      running on a hypervisor, which typically enables IOMMU interrupt remapping,
      the hypervisor usually has to trap and emulate the MMIO accesses in order
      to re-create the necessary interrupt remapping table entries in the IOMMU;
      otherwise the interrupts cannot work in the VM after hibernation.
      
      Hyper-V is different from other hypervisors in that it does not trap and
      emulate the MMIO accesses, and instead it uses a para-virtualized method,
      which requires the VM to call hv_compose_msi_msg() to notify the hypervisor
      of the info that would be passed to the hypervisor in the case of the
      trap-and-emulate method. This is not an issue for most PCI device
      drivers, which destroy and re-create the interrupts across hibernation, so
      hv_compose_msi_msg() is called automatically. However, some PCI device
      drivers (e.g. the in-tree GPU driver nouveau and the out-of-tree Nvidia
      proprietary GPU driver) do not destroy and re-create MSI/MSI-X interrupts
      across hibernation, so hv_pci_resume() has to call hv_compose_msi_msg(),
      otherwise the PCI device drivers can no longer receive interrupts after
      the VM resumes from hibernation.
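      
      A minimal model of that resume-time requirement (msi_model,
      compose_msi_msg_model() and hv_pci_resume_model() are invented stand-in
      names; the real path, per the text above, runs from hv_pci_resume() to
      hv_compose_msi_msg()): on resume, the controller driver itself re-notifies
      the hypervisor about every MSI whose device driver kept the interrupt
      allocated across hibernation.
      
      #include <stdio.h>
      
      struct msi_model {
              int irq;
              int composed;   /* told the hypervisor about this MSI since resume? */
      };
      
      /* Stand-in for hv_compose_msi_msg(): notify the hypervisor of one MSI. */
      static void compose_msi_msg_model(struct msi_model *m)
      {
              m->composed = 1;
              printf("re-composed MSI for irq %d\n", m->irq);
      }
      
      /* Stand-in for the resume hook: recompose every MSI the bus tracks. */
      static void hv_pci_resume_model(struct msi_model *msis, int n)
      {
              for (int i = 0; i < n; i++)
                      compose_msi_msg_model(&msis[i]);
      }
      
      int main(void)
      {
              /* Two MSIs whose driver did not free and re-request them. */
              struct msi_model msis[] = { { .irq = 24, .composed = 0 },
                                          { .irq = 25, .composed = 0 } };
      
              hv_pci_resume_model(msis, 2);
              return 0;
      }
      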
      
      Hyper-V is also different in that chip->irq_unmask() may fail in a
      Linux VM running on Hyper-V (on a physical machine, chip->irq_unmask()
      cannot fail because unmasking an MSI/MSI-X register is just an MMIO
      write): during hibernation, when a CPU is offlined, the kernel tries
      to move the interrupt to the remaining CPUs that haven't been offlined
      yet. In this case, hv_irq_unmask() -> hv_do_hypercall() always fails
      because the vmbus channel has been closed: here the early "return" in
      hv_irq_unmask() means the pci_msi_unmask_irq() is not called, i.e. the
      desc->masked remains "true", so later after hibernation, the MSI interrupt
      always remains masked, which is incorrect. Refer to cpu_disable_common()
      -> fixup_irqs() -> irq_migrate_all_off_this_cpu() -> migrate_one_irq():
      
      static bool migrate_one_irq(struct irq_desc *desc)
      {
      ...
              /* mask the IRQ while it is being moved */
              if (maskchip && chip->irq_mask)
                      chip->irq_mask(d);
      ...
              /* retarget the IRQ to a CPU that is still online */
              err = irq_do_set_affinity(d, affinity, false);
      ...
              /* re-enable delivery; on Hyper-V this ends up in hv_irq_unmask() */
              if (maskchip && chip->irq_unmask)
                      chip->irq_unmask(d);
      
      Fix the issue by calling pci_msi_unmask_irq() unconditionally in
      hv_irq_unmask(). Also suppress the error message for hibernation because
      the hypercall failure during hibernation does not matter (at this time
      all the devices have been frozen). Note: the correct affinity info is
      still updated in the irq_data structure in migrate_one_irq() ->
      irq_do_set_affinity() -> hv_set_affinity(), so later when the VM
      resumes, hv_pci_restore_msi_state() is able to correctly restore
      the interrupt with the correct affinity.
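      
      As an illustration of the control-flow change, the following self-contained
      C model contrasts the old early return with the fixed unconditional unmask
      (a sketch only: irq_model, hypercall_retarget(), pci_msi_unmask_irq_model()
      and the hibernating flag are invented stand-ins for the real driver and
      genirq state, not the actual pci-hyperv code).
      
      #include <stdbool.h>
      #include <stdio.h>
      
      struct irq_model {
              bool masked;            /* models desc->masked */
      };
      
      static bool hibernating;        /* vmbus channel closed: hypercall fails */
      
      /* Stand-in for the retarget hypercall issued by hv_irq_unmask(). */
      static int hypercall_retarget(void)
      {
              return hibernating ? -1 : 0;
      }
      
      /* Stand-in for pci_msi_unmask_irq(): clears the cached mask state. */
      static void pci_msi_unmask_irq_model(struct irq_model *irq)
      {
              irq->masked = false;
      }
      
      /* Before the fix: an early return leaves the IRQ masked forever. */
      static void hv_irq_unmask_old(struct irq_model *irq)
      {
              if (hypercall_retarget()) {
                      fprintf(stderr, "retarget failed\n");
                      return;         /* bug: mask state never cleared */
              }
              pci_msi_unmask_irq_model(irq);
      }
      
      /* After the fix: always unmask; only complain when not hibernating. */
      static void hv_irq_unmask_new(struct irq_model *irq)
      {
              if (hypercall_retarget() && !hibernating)
                      fprintf(stderr, "retarget failed\n");
              pci_msi_unmask_irq_model(irq);  /* unconditional */
      }
      
      int main(void)
      {
              struct irq_model a = { .masked = true };
              struct irq_model b = { .masked = true };
      
              hibernating = true;     /* CPU offlining during hibernation */
              hv_irq_unmask_old(&a);
              hv_irq_unmask_new(&b);
              printf("old: masked=%d  new: masked=%d\n", a.masked, b.masked);
              return 0;
      }
      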
      
      Link: https://lore.kernel.org/r/20201002085158.9168-1-decui@microsoft.com
      Fixes: ac82fc83 ("PCI: hv: Add hibernation support")
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reviewed-by: Jake Oshins <jakeo@microsoft.com>
  27. 28 Sep, 2020 · 1 commit
  28. 16 Sep, 2020 · 2 commits
  29. 28 Jul, 2020 · 1 commit
  30. 27 Jul, 2020 · 1 commit