1. 08 Jun, 2018 1 commit
  2. 25 May, 2018 1 commit
  3. 24 May, 2018 3 commits
  4. 02 May, 2018 1 commit
    • PCI: hv: Make sure the bus domain is really unique · 29927dfb
      Authored by Sridhar Pitchai
      When Linux runs as a guest VM on Hyper-V and Hyper-V adds a virtual PCI
      bus to the guest, Hyper-V always provides a unique PCI domain.
      
      commit 4a9b0933 ("PCI: hv: Use device serial number as PCI domain")
      overrode that unique domain with the serial number of the first device
      added to the virtual PCI bus.
      
      The reason for that patch was to have a consistent and short name for the
      device, but Hyper-V does not provide unique serial numbers. Using
      non-unique serial numbers as domain IDs leads to duplicate device
      addresses, which causes PCI bus registration to fail (a sketch of the
      collision follows this entry).
      
      commit 0c195567 ("netvsc: transparent VF management") removes the need
      for commit 4a9b0933 ("PCI: hv: Use device serial number as PCI
      domain"). When scripts were used to configure VF devices, the VF name
      needed to be consistent and short, but with commit 0c195567
      ("netvsc: transparent VF management") all the setup is done in the
      kernel, so we no longer need to maintain a consistent name.
      
      Revert commit 4a9b0933 ("PCI: hv: Use device serial number as PCI
      domain") so we can reliably support multiple devices being assigned to
      a guest.
      
      Tag the patch for stable kernels containing commit 0c195567
      ("netvsc: transparent VF management").
      
      Fixes: 4a9b0933 ("PCI: hv: Use device serial number as PCI domain")
      Signed-off-by: Sridhar Pitchai <sridhar.pitchai@microsoft.com>
      [lorenzo.pieralisi@arm.com: trimmed commit log]
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: stable@vger.kernel.org # v4.14+
      Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
      29927dfb
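      A minimal userspace sketch of the failure mode described above. It is not
      the pci-hyperv code; register_bus() and the example domain values are
      invented for illustration. Two VFs that share a serial number collide
      when the serial is used as the domain, while the host-provided per-bus
      domains register cleanly.

          #include <stdbool.h>
          #include <stdio.h>

          #define MAX_DOMAINS 64

          static unsigned int registered[MAX_DOMAINS];
          static int nr_registered;

          /* Mimics PCI bus registration failing on a duplicate domain ID. */
          static bool register_bus(unsigned int domain)
          {
              for (int i = 0; i < nr_registered; i++) {
                  if (registered[i] == domain) {
                      printf("domain %#x already registered: FAIL\n", domain);
                      return false;
                  }
              }
              registered[nr_registered++] = domain;
              printf("domain %#x registered\n", domain);
              return true;
          }

          int main(void)
          {
              /* Two VFs that happen to share a serial number. */
              unsigned int serial_a = 1, serial_b = 1;
              /* Unique per-bus domains as provided by the host (example values). */
              unsigned int host_dom_a = 0x23ab, host_dom_b = 0x91cd;

              /* Pre-revert behavior: domain taken from the device serial number. */
              register_bus(serial_a);
              register_bus(serial_b);   /* collides, registration fails */

              /* Post-revert behavior: keep the host-provided unique domain. */
              register_bus(host_dom_a);
              register_bus(host_dom_b); /* succeeds */
              return 0;
          }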
  5. 17 Mar, 2018 5 commits
  6. 29 Jan, 2018 1 commit
  7. 29 Dec, 2017 1 commit
    • x86/apic: Switch all APICs to Fixed delivery mode · a31e58e1
      Authored by Thomas Gleixner
      Some of the APIC incarnations are operating in lowest priority delivery
      mode. This worked as long as the vector management code allocated the same
      vector on all possible CPUs for each interrupt.
      
      Lowest priority delivery mode does not necessarily respect the affinity
      setting and may redirect the interrupt to some other online CPU. This was
      documented somewhere in the old code, and the conversion to single-target
      delivery failed to update the delivery mode of the affected APIC drivers,
      which results in spurious interrupts on some of the affected CPU/chipset
      combinations.
      
      Switch the APIC drivers over to Fixed delivery mode and remove all
      leftovers of lowest priority delivery mode (the delivery-mode encoding is
      sketched after this entry).
      
      Switching to Fixed delivery mode is not a problem on these CPUs because
      the kernel already uses Fixed delivery mode for IPIs; the SDM explicitly
      forbids lowest priority mode for IPIs. The reason is obvious: if the IRQ
      routing does not honor destination targets in lowest priority mode, an
      IPI targeted at CPU1 might end up on CPU0, which would be a fatal problem
      in many cases.
      
      As a consequence of this change, the apic::irq_delivery_mode field is now
      pointless, but this needs to be cleaned up in a separate patch.
      
      Fixes: fdba46ff ("x86/apic: Get rid of multi CPU affinity")
      Reported-by: vcaputo@pengaru.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: vcaputo@pengaru.com
      Cc: Pavel Machek <pavel@ucw.cz>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1712281140440.1688@nanos
      a31e58e1
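      As a rough illustration of the two modes, here is a standalone sketch of
      the x86 MSI data word layout per the Intel SDM. The macro names are made
      up for this example, not kernel identifiers: bits 7:0 carry the vector
      and bits 10:8 the delivery mode, with 000b meaning Fixed and 001b meaning
      lowest priority.

          #include <stdio.h>

          #define MSI_DATA_VECTOR(v)       ((unsigned int)(v) & 0xff) /* bits 7:0 */
          #define MSI_DATA_DELIVERY_FIXED  (0u << 8)                  /* bits 10:8 = 000b */
          #define MSI_DATA_DELIVERY_LOWPRI (1u << 8)                  /* bits 10:8 = 001b */

          int main(void)
          {
              unsigned int vector = 0x31; /* example vector number */

              /* Fixed delivery: the interrupt goes to exactly the programmed CPU. */
              unsigned int fixed = MSI_DATA_VECTOR(vector) | MSI_DATA_DELIVERY_FIXED;
              /* Lowest priority: the chipset may pick a different online CPU. */
              unsigned int lowpri = MSI_DATA_VECTOR(vector) | MSI_DATA_DELIVERY_LOWPRI;

              printf("fixed delivery:  data=%#06x\n", fixed);  /* prints 0x0031 */
              printf("lowest priority: data=%#06x\n", lowpri); /* prints 0x0131 */
              return 0;
          }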
  8. 08 Nov, 2017 1 commit
    • PCI: hv: Use effective affinity mask · 79aa801e
      Authored by Dexuan Cui
      The effective_affinity_mask is always set when an interrupt is assigned,
      in __assign_irq_vector() -> apic->cpu_mask_to_apicid(), e.g. for struct
      apic apic_physflat: -> default_cpu_mask_to_apicid() ->
      irq_data_update_effective_affinity(). However, it looks like
      d->common->affinity remains all-1's until user space or the kernel
      changes it later.
      
      In the early allocation/initialization phase of an IRQ, we should use
      the effective_affinity_mask; otherwise Hyper-V may not deliver the
      interrupt to the expected CPU (a sketch of the difference follows this
      entry). Without this patch, if we assign 7 Mellanox ConnectX-3 VFs to a
      32-vCPU VM, one of the VFs may fail to receive interrupts.
      Tested-by: Adrian Suhov <v-adsuho@microsoft.com>
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Jake Oshins <jakeo@microsoft.com>
      Cc: stable@vger.kernel.org
      Cc: Jork Loeser <jloeser@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      79aa801e
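      A userspace sketch of the mix-up, under stated assumptions: struct
      irq_data_sketch and first_cpu() are invented stand-ins for the kernel's
      irq_data and cpumask_first(). Picking the target CPU from the
      still-all-1's affinity mask always lands on CPU0, while the effective
      mask names the CPU the vector was actually placed on.

          #include <stdio.h>
          #include <strings.h> /* ffs() */

          /* Stand-in for struct irq_data: 32 CPUs, one bit per CPU. */
          struct irq_data_sketch {
              unsigned int affinity;  /* requested mask, defaults to all-1's */
              unsigned int effective; /* set when the vector is allocated */
          };

          /* Lowest set bit, as cpumask_first() would return. */
          static int first_cpu(unsigned int mask)
          {
              return ffs((int)mask) - 1;
          }

          int main(void)
          {
              struct irq_data_sketch d = {
                  .affinity  = 0xffffffff, /* nothing has narrowed it yet */
                  .effective = 1u << 5,    /* vector was placed on CPU 5 */
              };

              printf("target from affinity mask:  CPU %d (wrong)\n",
                     first_cpu(d.affinity));
              printf("target from effective mask: CPU %d (matches the vector)\n",
                     first_cpu(d.effective));
              return 0;
          }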
  9. 10 Aug, 2017 1 commit
  10. 04 Aug, 2017 1 commit
    • PCI: hv: Do not sleep in compose_msi_msg() · 80bfeeb9
      Authored by Stephen Hemminger
      MSI setup with the Hyper-V host was sleeping with locks held. This error
      is reported when doing SR-IOV hotplug with a kernel built with lockdep:
      
          BUG: sleeping function called from invalid context at kernel/sched/completion.c:93
          in_atomic(): 1, irqs_disabled(): 1, pid: 1405, name: ip
          3 locks held by ip/1405:
         #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff976b10bb>] rtnetlink_rcv+0x1b/0x40
         #1:  (&desc->request_mutex){+.+...}, at: [<ffffffff970ddd33>] __setup_irq+0xb3/0x720
         #2:  (&irq_desc_lock_class){-.-...}, at: [<ffffffff970ddd65>] __setup_irq+0xe5/0x720
         irq event stamp: 3476
         hardirqs last  enabled at (3475): [<ffffffff971b3005>] get_page_from_freelist+0x225/0xc90
         hardirqs last disabled at (3476): [<ffffffff978024e7>] _raw_spin_lock_irqsave+0x27/0x90
         softirqs last  enabled at (2446): [<ffffffffc05ef0b0>] ixgbevf_configure+0x380/0x7c0 [ixgbevf]
         softirqs last disabled at (2444): [<ffffffffc05ef08d>] ixgbevf_configure+0x35d/0x7c0 [ixgbevf]
      
      The workaround is to poll for the host response instead of blocking on a
      completion (the shape of the polling loop is sketched after this entry).
      Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      80bfeeb9
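      A plain C11 sketch of the shape of the fix, assuming an atomic flag
      stands in for the host's response; in the driver this is
      try_wait_for_completion() plus a short delay, not the code below.
      Instead of blocking on a completion while locks are held, spin on a
      non-blocking check that never sleeps (build with -pthread).

          #include <pthread.h>
          #include <stdatomic.h>
          #include <stdio.h>
          #include <unistd.h>

          static atomic_bool host_responded; /* stands in for the completion */

          /* Simulated host: answers after a short delay on another thread. */
          static void *host_reply(void *arg)
          {
              (void)arg;
              usleep(10000);
              atomic_store(&host_responded, true);
              return NULL;
          }

          int main(void)
          {
              pthread_t host;

              pthread_create(&host, NULL, host_reply, NULL);

              /*
               * Polling loop: legal in atomic context because it never
               * sleeps; the driver adds a small udelay() per iteration.
               */
              while (!atomic_load(&host_responded))
                  ; /* spin */

              puts("host response received without sleeping");
              pthread_join(host, NULL);
              return 0;
          }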
  11. 03 Jul, 2017 5 commits
  12. 18 Apr, 2017 1 commit
  13. 05 Apr, 2017 2 commits
  14. 24 Mar, 2017 2 commits
  15. 18 Feb, 2017 1 commit
  16. 11 Feb, 2017 1 commit
  17. 30 Nov, 2016 1 commit
  18. 17 Nov, 2016 3 commits
  19. 01 Nov, 2016 1 commit
  20. 07 Sep, 2016 5 commits
  21. 23 Aug, 2016 1 commit
  22. 26 Jul, 2016 1 commit