- 12 July 2022, 1 commit
-
-
Submitted by Samuel Holland
The cpumask that is passed to this function ultimately comes from irq_data_get_effective_affinity_mask(), which was recently changed to return a const cpumask pointer. The first level of functions handling the affinity mask was updated, but not this helper function.

Fixes: 4d0b8298 ("genirq: Return a const cpumask from irq_data_get_affinity_mask")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Samuel Holland <samuel@sholland.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20220708004931.1672-1-samuel@sholland.org
Signed-off-by: Wei Liu <wei.liu@kernel.org>
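A minimal sketch of the signature change this implies, with hypothetical helper names (only irq_data_get_effective_affinity_mask() and its const return type come from the commit):

  #include <linux/cpumask.h>
  #include <linux/irq.h>

  static int hv_build_vp_set(const struct cpumask *mask); /* hypothetical callee */

  /* Every helper the mask is threaded through must now accept a
   * 'const struct cpumask *', matching the getter's return type. */
  static int hv_affinity_helper(struct irq_data *data)    /* hypothetical */
  {
          const struct cpumask *mask =
                  irq_data_get_effective_affinity_mask(data);

          return hv_build_vp_set(mask);
  }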
-
- 14 May 2022, 2 commits
-
-
Submitted by Andrea Parri (Microsoft)
[ Similarly to commit a765ed47 ("PCI: hv: Fix synchronization between channel callback and hv_compose_msi_msg()"): ]

The (on-stack) teardown packet becomes invalid once the completion timeout in hv_pci_bus_exit() has expired and hv_pci_bus_exit() has returned. Prevent the channel callback from accessing the invalid packet by removing the ID associated to such packet from the VMbus requestor in hv_pci_bus_exit().

Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Link: https://lore.kernel.org/r/20220511223207.3386-3-parri.andrea@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
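A sketch of the timeout path; the requestor helper name and signature are approximations, not quoted from the commit:

  /* After a completion timeout, invalidate the transaction ID so a
   * late host response can no longer be matched to the on-stack
   * teardown packet. */
  if (!wait_for_completion_timeout(&comp_pkt.host_event, 10 * HZ)) {
          vmbus_request_addr(hdev->channel, trans_id);  /* drops the ID (approximate name) */
          return -ETIMEDOUT;
  }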
-
Submitted by Andrea Parri (Microsoft)
For additional robustness in the face of Hyper-V errors or malicious behavior, validate all values that originate from packets that Hyper-V has sent to the guest in the host-to-guest ring buffer. Ensure that invalid values cannot cause data to be copied out of the bounds of the source buffer in hv_pci_onchannelcallback().

While at it, remove a redundant validation in hv_pci_generic_compl(): hv_pci_onchannelcallback() already ensures that all processed incoming packets are "at least as large as [in fact larger than] a response".

Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Link: https://lore.kernel.org/r/20220511223207.3386-2-parri.andrea@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
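A minimal sketch of this style of validation, assuming the generic VMbus descriptor helper hv_pkt_datalen() and an illustrative copy routine:

  #include <linux/hyperv.h>
  #include <linux/minmax.h>
  #include <linux/string.h>

  /* Clamp a host-supplied payload length before copying, so a malicious
   * size cannot read past the incoming ring-buffer packet. */
  static void hv_copy_response(const struct vmpacket_descriptor *desc,
                               void *buf, u32 buf_len)
  {
          u32 data_len = hv_pkt_datalen(desc);  /* payload bytes */
          const void *data = (const char *)desc + (desc->offset8 << 3);

          if (data_len < sizeof(struct pci_response))
                  return;  /* too small to be a valid response */

          memcpy(buf, data, min(data_len, buf_len));
  }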
-
- 12 May 2022, 2 commits
-
-
Submitted by Jeffrey Hugo
According to Dexuan, the hypervisor folks believe that multi-msi allocations are not correct. compose_msi_msg() will allocate multi-msi one by one. However, multi-msi is a block of related MSIs, with alignment requirements. In order for the hypervisor to allocate properly aligned and consecutive entries in the IOMMU Interrupt Remapping Table, there should be a single mapping request that requests all of the multi-msi vectors in one shot.

Dexuan suggests detecting the multi-msi case and composing a single request related to the first MSI. Then for the other MSIs in the same block, use the cached information. This appears to be viable, so do it.

Suggested-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/1652282599-21643-1-git-send-email-quic_jhugo@quicinc.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
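A simplified sketch of the detection, using real msi_desc fields but a hypothetical cache helper:

  struct msi_desc *msi_desc = irq_data_get_msi_desc(data);

  /* Only the first interrupt of a multi-MSI block (data->irq ==
   * msi_desc->irq) composes a mapping request, asking for all
   * msi_desc->nvec_used vectors in one shot; siblings reuse it. */
  if (msi_desc->nvec_used > 1 && data->irq != msi_desc->irq) {
          hv_msi_fill_from_cache(msi_desc, msg);  /* hypothetical helper */
          return;
  }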
-
Submitted by Jeffrey Hugo
Currently if compose_msi_msg() is called multiple times, it will free any previous IRTE allocation, and generate a new allocation. While nothing prevents this from occurring, it is extraneous when Linux could just reuse the existing allocation and avoid a bunch of overhead.

However, when future IRTE allocations operate on blocks of MSIs instead of a single line, freeing the allocation will impact all of the lines. This could cause an issue where an allocation of N MSIs occurs, then some of the lines are retargeted, and finally the allocation is freed/reallocated. The freeing of the allocation removes all of the configuration for the entire block, which requires all the lines to be retargeted, which might not happen since some lines might already be unmasked/active.

Signed-off-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Tested-by: Dexuan Cui <decui@microsoft.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/1652282582-21595-1-git-send-email-quic_jhugo@quicinc.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
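A sketch of the reuse check at the top of the compose path (struct tran_int_desc and the chip_data stash are the driver's own; the exact placement is illustrative):

  struct tran_int_desc *int_desc;

  /* If a mapping already exists for this interrupt, reuse it instead
   * of freeing and reallocating, which would disturb sibling MSIs
   * sharing the same block allocation. */
  int_desc = irq_data_get_irq_chip_data(data);
  if (int_desc) {
          msg->address_hi = int_desc->address >> 32;
          msg->address_lo = int_desc->address & 0xffffffff;
          msg->data = int_desc->data;
          return;
  }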
-
- 03 May 2022, 1 commit
-
-
Submitted by Dexuan Cui
Currently when the pci-hyperv driver finishes probing and initializing the PCI device, it sets the PCI_COMMAND_MEMORY bit; later when the PCI device is registered to the core PCI subsystem, the core PCI driver's BAR detection and initialization code toggles the bit multiple times, and each toggling of the bit causes the hypervisor to unmap/map the virtual BARs from/to the physical BARs, which can be slow if the BAR sizes are huge, e.g., a Linux VM with 14 GPU devices has to spend more than 3 minutes on BAR detection and initialization, causing a long boot time.

Reduce the boot time by not setting the PCI_COMMAND_MEMORY bit when we register the PCI device (there is no need to have it set in the first place). The bit stays off till the PCI device driver calls pci_enable_device(). With this change, the boot time of such a 14-GPU VM is reduced by almost 3 minutes.

Link: https://lore.kernel.org/lkml/20220419220007.26550-1-decui@microsoft.com/
Tested-by: Boqun Feng (Microsoft) <boqun.feng@gmail.com>
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Jake Oshins <jakeo@microsoft.com>
Link: https://lore.kernel.org/r/20220502074255.16901-1-decui@microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
-
- 28 April 2022, 1 commit
-
-
Submitted by Jeffrey Hugo
In the multi-MSI case, hv_arch_irq_unmask() will only operate on the first MSI of the N allocated. This is because only the first msi_desc is cached and it is shared by all the MSIs of the multi-MSI block. This means that hv_arch_irq_unmask() gets the correct address, but the wrong data (always 0).

This can break MSIs. Let's assume MSI0 is vector 34 on CPU0, and MSI1 is vector 33 on CPU0. hv_arch_irq_unmask() is called on MSI0. It uses a hypercall to configure the MSI address and data (0) to vector 34 of CPU0. This is correct. Then hv_arch_irq_unmask() is called on MSI1. It uses another hypercall to configure the MSI address and data (0) to vector 33 of CPU0. This is wrong, and results in both MSI0 and MSI1 being routed to vector 33. Linux will observe extra instances of MSI1 and no instances of MSI0 despite the endpoint device behaving correctly.

For the multi-MSI case, we need unique address and data info for each MSI, but the cached msi_desc does not provide that. However, that information can be gotten from the int_desc cached in the chip_data by compose_msi_msg(). Fix the multi-MSI case to use that cached information instead. Since hv_set_msi_entry_from_desc() is no longer applicable, remove it.

Signed-off-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/1651068453-29588-1-git-send-email-quic_jhugo@quicinc.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
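A sketch of the fix in the unmask path (field names follow the driver's structures; the surrounding hypercall setup is omitted):

  struct tran_int_desc *int_desc;

  /* The cached msi_desc is shared by all MSIs of the block and would
   * yield the same (wrong) data for every line; the per-line
   * address/data were stored in chip_data by compose_msi_msg(). */
  int_desc = data->chip_data;

  params->int_entry.msi_entry.address.as_uint32 =
          int_desc->address & 0xffffffff;
  params->int_entry.msi_entry.data.as_uint32 = int_desc->data;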
-
- 25 April 2022, 3 commits
-
-
Submitted by Andrea Parri (Microsoft)
Dexuan wrote:

  "[...] when we disable AccelNet, the host PCI VSP driver sends a PCI_EJECT message first, and the channel callback may set hpdev->state to hv_pcichild_ejecting on a different CPU. This can cause hv_compose_msi_msg() to exit from the loop and 'return', and the on-stack variable 'ctxt' is invalid. Now, if the response message from the host arrives, the channel callback will try to access the invalid 'ctxt' variable, and this may cause a crash."

Schematically:

  Hyper-V sends PCI_EJECT msg
    hv_pci_onchannelcallback()
      state = hv_pcichild_ejecting
                                         hv_compose_msi_msg()
                                           alloc and init comp_pkt
                                           state == hv_pcichild_ejecting
  Hyper-V sends VM_PKT_COMP msg
    hv_pci_onchannelcallback()
      retrieve address of comp_pkt
                                           'free' comp_pkt and return
      comp_pkt->completion_func()

Dexuan also showed how the crash can be triggered after introducing suitable delays in the driver code, thus validating the 'assumption' that the host can still normally respond to the guest's compose_msi request after the host has started to eject the PCI device.

Fix the synchronization by leveraging the requestor lock as follows:

  - Before 'return'-ing in hv_compose_msi_msg(), remove the ID (while holding the requestor lock) associated to the completion packet.

  - Retrieve the address *and* call ->completion_func() within a same (requestor) critical section in hv_pci_onchannelcallback().

Reported-by: Wei Hu <weh@microsoft.com>
Reported-by: Dexuan Cui <decui@microsoft.com>
Suggested-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20220419122325.10078-7-parri.andrea@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
-
Submitted by Andrea Parri (Microsoft)
Currently, pointers to guest memory are passed to Hyper-V as transaction IDs in hv_pci. In the face of errors or malicious behavior in Hyper-V, hv_pci should not expose or trust the transaction IDs returned by Hyper-V to be valid guest memory addresses. Instead, use small integers generated by vmbus_requestor as request (transaction) IDs.

Suggested-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20220419122325.10078-3-parri.andrea@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
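A sketch of the sending side under this scheme; the helper name and signature (a send variant that returns the generated ID) are an assumption, not quoted from the commit:

  u64 trans_id;

  /* The host sees only the small requestor-generated 'trans_id'; the
   * guest address of 'pkt' stays private in the requestor table. */
  ret = vmbus_sendpacket_getid(hdev->channel, &pkt->message,
                               sizeof(struct pci_message),
                               (unsigned long)pkt, &trans_id,
                               VM_PKT_DATA_INBAND,
                               VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);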
-
Submitted by Jeffrey Hugo
If the allocation of multiple MSI vectors for multi-MSI fails in the core PCI framework, the framework will retry the allocation as a single MSI vector, assuming that meets the min_vecs specified by the requesting driver.

Hyper-V advertises that multi-MSI is supported, but reuses the VECTOR domain to implement that for x86. The VECTOR domain does not support multi-MSI, so the alloc will always fail and fall back to a single MSI allocation. In short, Hyper-V advertises a capability it does not implement.

Hyper-V can support multi-MSI because it coordinates with the hypervisor to map the MSIs in the IOMMU's interrupt remapper, which is something the VECTOR domain does not have. Therefore the fix is simple - copy what the x86 IOMMU drivers (AMD/Intel-IR) do by removing X86_IRQ_ALLOC_CONTIGUOUS_VECTORS after calling the VECTOR domain's pci_msi_prepare().

Fixes: 4daace0d ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
Signed-off-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Link: https://lore.kernel.org/r/1649856981-14649-1-git-send-email-quic_jhugo@quicinc.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
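A sketch of the resulting prepare hook; the flag and pci_msi_prepare() are real, and the body is a close approximation rather than a verbatim quote:

  static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
                            int nvec, msi_alloc_info_t *info)
  {
          int ret = pci_msi_prepare(domain, dev, nvec, info);

          /* The VECTOR domain cannot honor the contiguous-vector
           * constraint, and Hyper-V does the real multi-MSI mapping in
           * the IOMMU interrupt remapper, so drop the flag like the
           * AMD/Intel IR drivers do. */
          info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;

          return ret;
  }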
-
- 31 March 2022, 1 commit
-
-
Submitted by YueHaibing
Fix the following build error:

  drivers/pci/controller/pci-hyperv.c:769:13: error: ‘hv_set_msi_entry_from_desc’ defined but not used [-Werror=unused-function]
    769 | static void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,

The arm64 implementation of hv_set_msi_entry_from_desc() is not used after d06957d7 ("PCI: hv: Avoid the retarget interrupt hypercall in irq_unmask() on ARM64"), so remove it.

Fixes: d06957d7 ("PCI: hv: Avoid the retarget interrupt hypercall in irq_unmask() on ARM64")
Link: https://lore.kernel.org/r/20220317085130.36388-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Boqun Feng <boqun.feng@gmail.com>
-
- 29 March 2022, 1 commit
-
-
Submitted by Michael Kelley
PCI pass-thru devices in a Hyper-V VM are represented as a VMBus device and as a PCI device. The coherence of the VMbus device is set based on the VMbus node in ACPI, but the PCI device has no ACPI node and defaults to not hardware coherent. This results in extra software coherence management overhead on ARM64 when devices are hardware coherent.

Fix this by setting up the PCI host bus so that normal PCI mechanisms will propagate the coherence of the VMbus device to the PCI device. There's no effect on x86/x64 where devices are always hardware coherent.

Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Boqun Feng <boqun.feng@gmail.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/1648138492-2191-3-git-send-email-mikelley@microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
-
- 02 March 2022, 1 commit
-
-
Submitted by Boqun Feng
On ARM64 Hyper-V guests, SPIs are used for the interrupts of virtual PCI devices, and SPIs can be managed directly via GICD registers. Therefore the retarget interrupt hypercall is not needed on ARM64. An arch-specific interface hv_arch_irq_unmask() is introduced to handle the architecture level differences on this. For x86, the behavior remains unchanged, while for ARM64 no hypercall is invoked when unmasking an irq for virtual PCI devices.

Link: https://lore.kernel.org/r/20220217034525.1687678-1-boqun.feng@gmail.com
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
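A condensed sketch of the split; only hv_arch_irq_unmask() itself is named by the commit, and the common unmask tail is simplified:

  #if defined(CONFIG_X86)
  static void hv_arch_irq_unmask(struct irq_data *data)
  {
          /* Build a retarget payload and issue the
           * HVCALL_RETARGET_INTERRUPT hypercall (condensed). */
  }
  #elif defined(CONFIG_ARM64)
  static void hv_arch_irq_unmask(struct irq_data *data)
  {
          /* SPIs are managed directly through GICD registers, so no
           * hypercall is needed here. */
  }
  #endif

  static void hv_irq_unmask(struct irq_data *data)
  {
          hv_arch_irq_unmask(data);
          pci_msi_unmask_irq(data);  /* tail simplified for the sketch */
  }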
-
- 03 February 2022, 1 commit
-
-
Submitted by Long Li
When the kernel boots with a NUMA topology with some NUMA nodes offline, the PCI driver should only set an online NUMA node on the device. This can happen during KDUMP where some NUMA nodes are not made online by the KDUMP kernel. This patch also fixes the case where the kernel boots with "numa=off".

Fixes: 999dd956 ("PCI: hv: Add support for protocol 1.3 and support PCI_BUS_RELATIONS2")
Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Purna Pavan Chandra Aekkaladevi <paekkaladevi@microsoft.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Link: https://lore.kernel.org/r/1643247814-15184-1-git-send-email-longli@linuxonhyperv.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
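A sketch of the guard; numa_map_to_online_node() is the generic helper for exactly this mapping, though whether the driver uses it (versus an open-coded node_online() check) is an assumption:

  /* Never assign an offline node (possible in a kdump kernel or with
   * numa=off) to the device; map it to a nearby online node instead. */
  int node = numa_map_to_online_node(host_reported_node); /* illustrative input */

  set_dev_node(&dev->dev, node);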
-
- 12 January 2022, 2 commits
-
-
Submitted by Sunil Muthuswamy
Add arm64 Hyper-V vPCI support by implementing the arch specific interfaces. Introduce an IRQ domain and chip specific to Hyper-v vPCI that is based on SPIs. The IRQ domain parents itself to the arch GIC IRQ domain for basic vector management.

[bhelgaas: squash in fix from Yang Li <yang.lee@linux.alibaba.com>: https://lore.kernel.org/r/20220112003324.62755-1-yang.lee@linux.alibaba.com]

Link: https://lore.kernel.org/r/1641411156-31705-3-git-send-email-sunilmut@linux.microsoft.com
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
-
Submitted by Sunil Muthuswamy
Encapsulate arch dependencies in Hyper-V vPCI through a set of arch-dependent interfaces. Adding these arch specific interfaces will allow for an implementation for other architectures, such as arm64. There are no functional changes expected from this patch.

Link: https://lore.kernel.org/r/1641411156-31705-2-git-send-email-sunilmut@linux.microsoft.com
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
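Two of the hooks, with signatures taken from other commits in this log (the full interface set is larger):

  /* Per-arch hooks behind which pci-hyperv hides x86/arm64 details. */
  void hv_arch_irq_unmask(struct irq_data *data);
  void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
                                  struct msi_desc *msi_desc);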
-
- 17 December 2021, 1 commit
-
-
Submitted by Thomas Gleixner
Replace the about to vanish iterators and make use of the filtering. Take the descriptor lock around the iterators.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20211206210748.629363944@linutronix.de
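A sketch of the new iterator pattern; the lock helpers and the MSI_DESC_ASSOCIATED filter belong to the generic MSI core, while the loop body is illustrative:

  struct msi_desc *entry;

  msi_lock_descs(&pdev->dev);
  /* MSI_DESC_ASSOCIATED filters for descriptors that already have a
   * Linux interrupt number attached. */
  msi_for_each_desc(entry, &pdev->dev, MSI_DESC_ASSOCIATED)
          hv_compose_msi_msg(irq_get_irq_data(entry->irq), &entry->msg);
  msi_unlock_descs(&pdev->dev);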
-
- 19 November 2021, 1 commit
-
-
Submitted by Naveen Naidu
Include PCI_ERROR_RESPONSE along with 0xFFFFFFFF in the comment about identifying config read errors. This makes checks for config read errors easier to find. Comment change only.

Link: https://lore.kernel.org/r/12124f41cab7d8aa944de05f85d9567bfe157704.1637243717.git.naveennaidu479@gmail.com
Signed-off-by: Naveen Naidu <naveennaidu479@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 13 October 2021, 1 commit
-
-
Submitted by Krzysztof Wilczyński
"dom_req" is a u16 but varargs automatically promotes it to int, so there's no point in using the %h modifier. Drop it. See cbacb5ab ("docs: printk-formats: Stop encouraging use of unnecessary %h[xudi] and %hh[xudi]") and 70eb2275 ("checkpatch: add warning for unnecessary use of %h[xudi] and %hh[xudi]"). Link: https://lore.kernel.org/r/20211008222732.2868493-1-kw@linux.comSigned-off-by: NKrzysztof Wilczyński <kw@linux.com> Signed-off-by: NBjorn Helgaas <bhelgaas@google.com>
-
- 24 September 2021, 1 commit
-
-
Submitted by Long Li
In hv_pci_bus_exit, the code is holding a spinlock while calling pci_destroy_slot(), which takes a mutex. That is not safe while holding a spinlock. Fix this by moving the children to be deleted to a list on the stack, and removing them after the spinlock is released.

Fixes: 94d22763 ("PCI: hv: Fix a race condition when removing the device")
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Rob Herring <robh@kernel.org>
Cc: "Krzysztof Wilczyński" <kw@linux.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Link: https://lore.kernel.org/linux-hyperv/20210823152130.GA21501@kili/
Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
Link: https://lore.kernel.org/r/1630365207-20616-1-git-send-email-longli@linuxonhyperv.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
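A sketch of the pattern (list and lock field names follow the driver; error paths omitted):

  struct hv_pci_dev *hpdev, *tmp;
  unsigned long flags;
  LIST_HEAD(removed);

  /* Detach the children while holding the spinlock... */
  spin_lock_irqsave(&hbus->device_list_lock, flags);
  list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry)
          list_move_tail(&hpdev->list_entry, &removed);
  spin_unlock_irqrestore(&hbus->device_list_lock, flags);

  /* ...and do the mutex-taking teardown after it is released. */
  list_for_each_entry_safe(hpdev, tmp, &removed, list_entry) {
          list_del(&hpdev->list_entry);
          if (hpdev->pci_slot)
                  pci_destroy_slot(hpdev->pci_slot);  /* may sleep */
          kfree(hpdev);
  }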
-
- 23 August 2021, 4 commits
-
-
Submitted by Boqun Feng
Now we have everything we need, just provide a proper sysdata type for the bus to use on ARM64 and everything else works.

Link: https://lore.kernel.org/r/20210726180657.142727-9-boqun.feng@gmail.com
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Submitted by Boqun Feng
Since PCI_HYPERV depends on PCI_MSI_IRQ_DOMAIN which selects GENERIC_MSI_IRQ_DOMAIN, we can use dev_set_msi_domain() to set up the MSI domain at probing time, and this works for both x86 and ARM64. Therefore use it as the preparation for ARM64 Hyper-V PCI support.

As a result, we no longer need to maintain ->fwnode in x86-specific pci_sysdata; make hv_pcibus_device own it instead.

Link: https://lore.kernel.org/r/20210726180657.142727-8-boqun.feng@gmail.com
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
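The hookup itself is a one-liner at probe time; a sketch, with the field names assumed from the driver's hv_pcibus_device:

  /* The PCI core finds the MSI domain via dev_get_msi_domain() on
   * both x86 and arm64 - no arch-specific fwnode plumbing needed. */
  dev_set_msi_domain(&hbus->bridge->dev, hbus->irq_domain);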
-
Submitted by Boqun Feng
No functional change, just store and maintain the PCI domain number in the ->domain_nr of pci_host_bridge. Note that we still need to keep the copy of domain number in x86-specific pci_sysdata, because x86 is not a PCI_DOMAINS_GENERIC=y architecture, so the ->domain_nr of pci_host_bridge doesn't work for it yet.

Link: https://lore.kernel.org/r/20210726180657.142727-7-boqun.feng@gmail.com
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Submitted by Arnd Bergmann
In order to support ARM64 Hyper-V PCI, we need to set up the bridge at probing time because ARM64 is a PCI_DOMAINS_GENERIC=y arch and we don't have pci_config_window (ARM64 sysdata) for a PCI root bus on Hyper-V, so it's impossible to retrieve the information (e.g. PCI domains, MSI domains) from bus sysdata on ARM64 after creation.

Originally in create_root_hv_pci_bus(), pci_create_root_bus() is used to create the root bus and the corresponding bridge based on x86 sysdata. Now we create a bridge first and then call pci_scan_root_bus_bridge(), which allows us to do the necessary set-ups for the bridge.

Link: https://lore.kernel.org/r/20210726180657.142727-6-boqun.feng@gmail.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
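A sketch of the new flow using the generic host-bridge API (hv_pcifront_ops is the driver's pci_ops; error handling trimmed):

  struct pci_host_bridge *bridge;

  bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
  if (!bridge)
          return -ENOMEM;

  /* Configure the bridge before scanning: ops, sysdata and any
   * domain/MSI setup can happen here, on x86 and arm64 alike. */
  bridge->ops = &hv_pcifront_ops;
  bridge->sysdata = &hbus->sysdata;

  return pci_scan_root_bus_bridge(bridge);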
-
- 13 August 2021, 1 commit
-
-
Submitted by Sunil Muthuswamy
Hyper-V vPCI protocol version 1.4 adds support for create interrupt v3. Create interrupt v3 essentially makes the size of the vector field bigger in the message, thereby allowing bigger vector values. For example, that will come into play for supporting LPI vectors on ARM, which start at 8192.

Link: https://lore.kernel.org/r/MW4PR21MB20026A6EA554A0B9EC696AA8C0159@MW4PR21MB2002.namprd21.prod.outlook.com
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
-
- 21 June 2021, 1 commit
-
-
Submitted by Haiyang Zhang
Add a check for hv_is_hyperv_initialized() at the top of init_hv_pci_drv(), so if the pci-hyperv driver is force-loaded on non-Hyper-V platforms, init_hv_pci_drv() will exit immediately, without any side effects, like assignments to hvpci_block_ops, etc.

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reported-and-tested-by: Mohammad Alqayeem <mohammad.alqyeem@nutanix.com>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
Link: https://lore.kernel.org/r/1621984653-1210-1-git-send-email-haiyangz@microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
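The guard is tiny; a sketch (the registration tail is an assumption about the surrounding module init, not quoted from the commit):

  static int __init init_hv_pci_drv(void)
  {
          /* No-op on bare metal or other hypervisors: a force-loaded
           * module must leave hvpci_block_ops etc. untouched. */
          if (!hv_is_hyperv_initialized())
                  return -ENODEV;

          return vmbus_driver_register(&hv_pci_drv);  /* assumed tail */
  }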
-
- 04 June 2021, 2 commits
-
-
Submitted by Long Li
With the new method of flushing/stopping the workqueue before doing bus removal, the old mechanism of using a refcount and waiting for completion is no longer needed. Remove that dead code.

Link: https://lore.kernel.org/r/1620806809-31055-1-git-send-email-longli@linuxonhyperv.com
Signed-off-by: Long Li <longli@microsoft.com>
[lorenzo.pieralisi@arm.com: Reworded subject]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
-
Submitted by Long Li
On removing the device, any work item (hv_pci_devices_present() or hv_pci_eject_device()) scheduled on workqueue hbus->wq may still be running and race with hv_pci_remove(). This can happen because the host may send PCI_EJECT or PCI_BUS_RELATIONS(2) and decide to rescind the channel immediately after that.

Fix this by flushing/destroying the workqueue of hbus before doing hbus remove.

Link: https://lore.kernel.org/r/1620806800-30983-1-git-send-email-longli@linuxonhyperv.com
Signed-off-by: Long Li <longli@microsoft.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
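A sketch of the ordering in the remove path; destroy_workqueue() flushes pending work before tearing the queue down, and the rest of the teardown is elided:

  static int hv_pci_remove(struct hv_device *hdev)
  {
          struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);

          /* After this, no hv_pci_devices_present() or
           * hv_pci_eject_device() work item can still be running. */
          destroy_workqueue(hbus->wq);

          /* ...bus exit, channel close, resource release (elided)... */
          return 0;
  }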
-
- 21 April 2021, 1 commit
-
-
Submitted by Joseph Salisbury
There is not a consistent pattern for checking Hyper-V hypercall status. Existing code uses a number of variants. The variants work, but a consistent pattern would improve the readability of the code, and be more conformant to what the Hyper-V TLFS says about hypercall status.

Implemented new helper functions hv_result(), hv_result_success(), and hv_repcomp(). Changed the places where hv_do_hypercall() and related variants are used to use the helper functions.

Signed-off-by: Joseph Salisbury <joseph.salisbury@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/1618620183-9967-2-git-send-email-joseph.salisbury@linux.microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
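A sketch matching the shape of these helpers; the mask/offset macro names follow the Hyper-V TLFS headers, and hv_repcomp() extracts the reps-completed count of rep hypercalls:

  static inline u64 hv_result(u64 status)
  {
          return status & HV_HYPERCALL_RESULT_MASK;
  }

  static inline bool hv_result_success(u64 status)
  {
          return hv_result(status) == HV_STATUS_SUCCESS;
  }

  static inline unsigned int hv_repcomp(u64 status)
  {
          /* Bits [43:32] of the status hold the reps-completed count. */
          return (status & HV_HYPERCALL_REP_COMP_MASK) >>
                  HV_HYPERCALL_REP_COMP_OFFSET;
  }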
-
- 20 April 2021, 1 commit
-
-
Submitted by Marc Zyngier
The Hyper-V PCI driver still makes use of a msi_controller structure, but it looks more like a distant leftover than anything actually useful, since it is initialised to 0 and never used for anything. Just remove it.

Link: https://lore.kernel.org/r/20210330151145.997953-7-maz@kernel.org
Tested-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 17 March 2021, 1 commit
-
-
Submitted by Sebastian Andrzej Siewior

The hv_compose_msi_msg() callback in irq_chip::irq_compose_msi_msg is invoked via irq_chip_compose_msi_msg(), which itself is always invoked from atomic contexts from the guts of the interrupt core code.

There is no way to change this w/o rewriting the whole driver, so use tasklet_disable_in_atomic(), which allows tasklet_disable() to be made sleepable once the remaining atomic users are addressed.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210309084242.516519290@linutronix.de
-
- 11 February 2021, 1 commit
-
-
Submitted by Wei Liu
We will soon use the same structure to handle IO-APIC interrupts as well. Introduce an enum to identify the source and a data structure for IO-APIC RTE. While at it, update pci-hyperv.c to use the enum. No functional change.

Signed-off-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Rob Herring <robh@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-13-wei.liu@kernel.org
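A sketch of the additions; the enum matches the Hyper-V definitions, while the RTE union is abbreviated to its first fields:

  enum hv_interrupt_source {
          HV_INTERRUPT_SOURCE_MSI = 1,  /* MSI and MSI-X */
          HV_INTERRUPT_SOURCE_IOAPIC,
  };

  union hv_ioapic_rte {
          u64 as_uint64;
          struct {
                  u32 vector:8;
                  u32 delivery_mode:3;
                  /* ...remaining RTE fields abbreviated... */
          };
  };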
-
- 28 January 2021, 1 commit
-
-
Submitted by Bjorn Helgaas
Fix misspelling of "silently".

Link: https://lore.kernel.org/r/20210126213855.2923461-1-helgaas@kernel.org
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
-
- 29 October 2020, 1 commit
-
-
Submitted by Thomas Gleixner
The enum ioapic_irq_destination_types and the enumerated constants starting with 'dest_' are gross misnomers because they describe the delivery mode. Rename the enum and the constants so they actually make sense.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201024213535.443185-6-dwmw2@infradead.org
-
- 02 October 2020, 1 commit
-
-
Submitted by Dexuan Cui
pci_restore_msi_state() directly writes the MSI/MSI-X related registers via MMIO. On a physical machine, this works perfectly; for a Linux VM running on a hypervisor, which typically enables IOMMU interrupt remapping, the hypervisor usually should trap and emulate the MMIO accesses in order to re-create the necessary interrupt remapping table entries in the IOMMU, otherwise the interrupts cannot work in the VM after hibernation.

Hyper-V is different from other hypervisors in that it does not trap and emulate the MMIO accesses, and instead it uses a para-virtualized method, which requires the VM to call hv_compose_msi_msg() to notify the hypervisor of the info that would be passed to the hypervisor in the case of the trap-and-emulate method.

This is not an issue to a lot of PCI device drivers, which destroy and re-create the interrupts across hibernation, so hv_compose_msi_msg() is called automatically. However, some PCI device drivers (e.g. the in-tree GPU driver nouveau and the out-of-tree Nvidia proprietary GPU driver) do not destroy and re-create MSI/MSI-X interrupts across hibernation, so hv_pci_resume() has to call hv_compose_msi_msg(), otherwise the PCI device drivers can no longer receive interrupts after the VM resumes from hibernation.

Hyper-V is also different in that chip->irq_unmask() may fail in a Linux VM running on Hyper-V (on a physical machine, chip->irq_unmask() cannot fail because unmasking an MSI/MSI-X register just means an MMIO write): during hibernation, when a CPU is offlined, the kernel tries to move the interrupt to the remaining CPUs that haven't been offlined yet. In this case, hv_irq_unmask() -> hv_do_hypercall() always fails because the vmbus channel has been closed: here the early "return" in hv_irq_unmask() means the pci_msi_unmask_irq() is not called, i.e. the desc->masked remains "true", so later after hibernation, the MSI interrupt always remains masked, which is incorrect.

Refer to cpu_disable_common() -> fixup_irqs() -> irq_migrate_all_off_this_cpu() -> migrate_one_irq():

  static bool migrate_one_irq(struct irq_desc *desc)
  {
          ...
          if (maskchip && chip->irq_mask)
                  chip->irq_mask(d);
          ...
          err = irq_do_set_affinity(d, affinity, false);
          ...
          if (maskchip && chip->irq_unmask)
                  chip->irq_unmask(d);

Fix the issue by calling pci_msi_unmask_irq() unconditionally in hv_irq_unmask(). Also suppress the error message for hibernation because the hypercall failure during hibernation does not matter (at this time all the devices have been frozen).

Note: the correct affinity info is still updated into the irqdata data structure in migrate_one_irq() -> irq_do_set_affinity() -> hv_set_affinity(), so later when the VM resumes, hv_pci_restore_msi_state() is able to correctly restore the interrupt with the correct affinity.

Link: https://lore.kernel.org/r/20201002085158.9168-1-decui@microsoft.com
Fixes: ac82fc83 ("PCI: hv: Add hibernation support")
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Jake Oshins <jakeo@microsoft.com>
-
- 28 September 2020, 1 commit
-
-
Submitted by Krzysztof Wilczyński
Add missing documentation for the parameters "version" and "num_version" of the hv_pci_protocol_negotiation() function and resolve build time kernel-doc warnings:

  drivers/pci/controller/pci-hyperv.c:2535: warning: Function parameter or member 'version' not described in 'hv_pci_protocol_negotiation'
  drivers/pci/controller/pci-hyperv.c:2535: warning: Function parameter or member 'num_version' not described in 'hv_pci_protocol_negotiation'

No change to functionality intended.

Signed-off-by: Krzysztof Wilczyński <kw@linux.com>
Link: https://lore.kernel.org/r/20200925234753.1767227-1-kw@linux.com
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
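A sketch of what such a kernel-doc block looks like after the fix; the parameter names come from the warnings above, while the description wording and the exact signature are assumptions:

  /**
   * hv_pci_protocol_negotiation() - Negotiate the protocol version
   * @hdev:        VMBus's tracking of this root PCI bus
   * @version:     Array of supported channel protocol versions, most
   *               preferred first
   * @num_version: Number of elements in the version array
   */
  static int hv_pci_protocol_negotiation(struct hv_device *hdev,
                                         enum pci_protocol_version_t version[],
                                         int num_version)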
-
- 16 September 2020, 2 commits
-
-
Submitted by Thomas Gleixner
pci_msi_get_hwirq() and pci_msi_set_desc() are no longer special. Enable the generic MSI domain ops in the core and PCI MSI code unconditionally and get rid of the x86 specific implementations in the X86 MSI code and in the hyperv PCI driver.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200826112332.564274859@linutronix.de
-
Submitted by Thomas Gleixner
Convert the interrupt remap drivers to retrieve the pci device from the msi descriptor and use info::hwirq. This is the first step to prepare x86 for using the generic MSI domain ops.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Joerg Roedel <jroedel@suse.de>
Link: https://lore.kernel.org/r/20200826112332.466405395@linutronix.de
-
- 28 July 2020, 1 commit
-
-
Submitted by Wei Yongjun
Sparse reports the following build warnings:

  drivers/pci/controller/pci-hyperv.c:941:5: warning: symbol 'hv_read_config_block' was not declared. Should it be static?
  drivers/pci/controller/pci-hyperv.c:1021:5: warning: symbol 'hv_write_config_block' was not declared. Should it be static?
  drivers/pci/controller/pci-hyperv.c:1090:5: warning: symbol 'hv_register_block_invalidate' was not declared. Should it be static?

Those functions are not used outside of this file, so mark them static.

Link: https://lore.kernel.org/r/20200706135234.80758-1-weiyongjun1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
- 27 July 2020, 1 commit
-
-
Submitted by Wei Hu
Kdump could sometimes fail on a Hyper-V guest because the retry in hv_pci_enter_d0() releases child device structures in hv_pci_bus_exit(). Although there is a second asynchronous device relations message sent from the host, if this message arrives to the guest after hv_send_resource_allocated() is called, the retry would fail.

Fix the problem by moving the retry to hv_pci_probe() and starting the retry from the hv_pci_query_relations() call. This will cause a device relations message to arrive to the guest synchronously; the guest would then be able to rebuild the child device structures before calling hv_send_resource_allocated().

Link: https://lore.kernel.org/r/20200727071731.18516-1-weh@microsoft.com
Fixes: c81992e7 ("PCI: hv: Retry PCI bus D0 entry on invalid device state")
Signed-off-by: Wei Hu <weh@microsoft.com>
[lorenzo.pieralisi@arm.com: fixed a comment and commit log]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
-