1. 02 December 2010 (1 commit)
    • xen: fix MSI setup and teardown for PV on HVM guests · af42b8d1
      Committed by Stefano Stabellini
      When remapping MSIs into pirqs for PV on HVM guests, qemu is responsible
      for doing the actual mapping and unmapping.
      We only give qemu the desired pirq number when we ask for the mapping
      the first time; after that, we should read the pirq number back from
      qemu every time we want to re-enable the MSI.
      
      This fixes a bug in xen_hvm_setup_msi_irqs that manifests itself when
      trying to enable the same MSI for the second time: the old MSI to pirq
      mapping is still valid at this point but xen_hvm_setup_msi_irqs would
      try to assign a new pirq anyway.
      A simple way to reproduce this bug is to assign an MSI-capable network
      card to a PV on HVM guest: if the user brings the corresponding
      ethernet interface down and then up again, Linux fails to re-enable
      MSIs on the device.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
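      Editor's note: the re-enable logic described above can be sketched roughly as
      below. This is a hedged illustration only; the helper names
      (xen_find_existing_pirq, xen_request_new_pirq, xen_bind_pirq) and the wrapper
      hvm_setup_msi_irq are placeholders, not the actual arch/x86/pci/xen.c symbols.

          /* Hedged sketch of the MSI re-enable path; helpers are placeholders. */
          #include <linux/pci.h>
          #include <linux/msi.h>

          int xen_find_existing_pirq(struct pci_dev *dev, struct msi_desc *desc);
          int xen_request_new_pirq(struct pci_dev *dev, struct msi_desc *desc);
          int xen_bind_pirq(struct pci_dev *dev, int pirq);

          static int hvm_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc)
          {
              /* Re-enable: qemu still holds a valid MSI->pirq mapping for this
               * device, so read the pirq back instead of asking for a new one. */
              int pirq = xen_find_existing_pirq(dev, desc);

              if (pirq < 0) {
                  /* First enable: propose a pirq and let qemu do the mapping. */
                  pirq = xen_request_new_pirq(dev, desc);
                  if (pirq < 0)
                      return pirq;
              }
              return xen_bind_pirq(dev, pirq);
          }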
  2. 12 November 2010 (1 commit)
    • x86/PCI: coalesce overlapping host bridge windows · 4723d0f2
      Committed by Bjorn Helgaas
      Some BIOSes provide PCI host bridge windows that overlap, e.g.,
      
          pci_root PNP0A03:00: host bridge window [mem 0xb0000000-0xffffffff]
          pci_root PNP0A03:00: host bridge window [mem 0xafffffff-0xdfffffff]
          pci_root PNP0A03:00: host bridge window [mem 0xf0000000-0xffffffff]
      
      If we simply insert these as children of iomem_resource, the second window
      fails because it conflicts with the first, and the third is inserted as a
      child of the first, i.e.,
      
          b0000000-ffffffff PCI Bus 0000:00
            f0000000-ffffffff PCI Bus 0000:00
      
      When we claim PCI device resources, placing them in the first window
      can cause collisions like this:
      
          pci 0000:00:01.0: address space collision: [mem 0xff300000-0xff4fffff] conflicts with PCI Bus 0000:00 [mem 0xf0000000-0xffffffff]
      
      Host bridge windows are top-level resources by definition, so it doesn't
      make sense to make the third window a child of the first.  This patch
      coalesces any host bridge windows that overlap.  For the example above,
      the result is this single window:
      
          pci_root PNP0A03:00: host bridge window [mem 0xafffffff-0xffffffff]
      
      This fixes a 2.6.34 regression.
      
      Reference: https://bugzilla.kernel.org/show_bug.cgi?id=17011
      Reported-and-tested-by: Anisse Astier <anisse@astier.eu>
      Reported-and-tested-by: Pramod Dematagoda <pmd.lotr.gandalf@gmail.com>
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
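      Editor's note: the coalescing idea can be shown with a small, self-contained
      sketch in plain user-space C (not the actual arch/x86/pci/acpi.c code): sort
      the windows by start address and merge any that overlap or touch. Run on the
      three windows above it prints the single merged window from the commit message.

          #include <stdio.h>
          #include <stdlib.h>

          struct window { unsigned long long start, end; };

          static int cmp_start(const void *a, const void *b)
          {
              const struct window *wa = a, *wb = b;
              return (wa->start > wb->start) - (wa->start < wb->start);
          }

          /* Merge overlapping/adjacent windows in place; return the new count. */
          static int coalesce(struct window *w, int n)
          {
              int i, out = 0;

              qsort(w, n, sizeof(*w), cmp_start);
              for (i = 1; i < n; i++) {
                  if (w[i].start <= w[out].end + 1) {
                      if (w[i].end > w[out].end)
                          w[out].end = w[i].end;   /* overlap: extend */
                  } else {
                      w[++out] = w[i];             /* disjoint: keep */
                  }
              }
              return n ? out + 1 : 0;
          }

          int main(void)
          {
              struct window w[] = {
                  { 0xb0000000ULL, 0xffffffffULL },
                  { 0xafffffffULL, 0xdfffffffULL },
                  { 0xf0000000ULL, 0xffffffffULL },
              };
              int n = coalesce(w, 3);

              for (int i = 0; i < n; i++)
                  printf("[mem 0x%llx-0x%llx]\n", w[i].start, w[i].end);
              return 0;   /* prints the single window 0xafffffff-0xffffffff */
          }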
  3. 09 November 2010 (1 commit)
  4. 27 October 2010 (1 commit)
  5. 23 October 2010 (7 commits)
  6. 21 October 2010 (1 commit)
  7. 18 October 2010 (5 commits)
  8. 16 October 2010 (2 commits)
  9. 24 September 2010 (1 commit)
  10. 31 July 2010 (4 commits)
    • x86/PCI: use for_each_pci_dev() · 1f7979ac
      Committed by Kulikov Vasiliy
      Use for_each_pci_dev() to simplify the code.
      Signed-off-by: Kulikov Vasiliy <segooon@gmail.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
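      Editor's note: for_each_pci_dev(d) is a thin macro around
      pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d), so conversions of this kind are
      mechanical. The sketch below illustrates the shape of the change; the function
      name walk_all_pci_devices is an invented example, not a hunk from the patch.

          #include <linux/pci.h>

          static void walk_all_pci_devices(void)
          {
              struct pci_dev *dev = NULL;

              /* Before: open-coded walk over every PCI device. */
              while ((dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) {
                  /* ... per-device work ... */
              }

              /* After: the same walk, via the helper macro. */
              for_each_pci_dev(dev) {
                  /* ... per-device work ... */
              }
          }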
    • x86/PCI: use host bridge _CRS info on ASRock ALiveSATA2-GLAN · 2491762c
      Committed by Bjorn Helgaas
      This DMI quirk turns on "pci=use_crs" for the ALiveSATA2-GLAN because
      amd_bus.c doesn't handle this system correctly.
      
      The system has a single HyperTransport I/O chain, but has two PCI host
      bridges to buses 00 and 80.  amd_bus.c learns the MMIO range associated
      with buses 00-ff and that this range is routed to the HT chain hosted at
      node 0, link 0:
      
          bus: [00, ff] on node 0 link 0
          bus: 00 index 1 [mem 0x80000000-0xfcffffffff]
      
      This includes the address space for both bus 00 and bus 80, and amd_bus.c
      assumes it's all routed to bus 00.
      
      We find device 80:01.0, which BIOS left in the middle of that space, but
      we don't find a bridge from bus 00 to bus 80, so we conclude that 80:01.0
      is unreachable from bus 00, and we move it from the original, working,
      address to something outside the bus 00 aperture, which does not work:
      
          pci 0000:80:01.0: reg 10: [mem 0xfebfc000-0xfebfffff 64bit]
          pci 0000:80:01.0: BAR 0: assigned [mem 0xfd00000000-0xfd00003fff 64bit]
      
      The BIOS told us everything we need to know to handle this correctly,
      so we're better off if we just pay attention, which lets us leave the
      80:01.0 device at the original, working, address:
      
          ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
          pci_root PNP0A03:00: host bridge window [mem 0x80000000-0xff37ffff]
          ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 80-ff])
          pci_root PNP0A08:00: host bridge window [mem 0xfebfc000-0xfebfffff]
      
      This was a regression between 2.6.33 and 2.6.34.  In 2.6.33, amd_bus.c
      was used only when we found multiple HT chains.  3e3da00c, which
      enabled amd_bus.c even on systems with a single HT chain, caused this
      failure.
      
      This quirk was written by Graham.  If we ever enable "pci=use_crs" for
      machines from 2006 or earlier, this quirk should be removed.
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=16007
      
      Cc: stable@kernel.org
      Reported-by: Graham Ramsey <ramsey.graham@ntlworld.com>
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
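      Editor's note: DMI quirks of this kind are wired up as dmi_system_id entries
      checked during x86 PCI init. The sketch below shows that shape; the table name
      pci_crs_quirks and the local pci_use_crs flag are assumptions for illustration,
      not necessarily the identifiers the patch touches.

          #include <linux/types.h>
          #include <linux/dmi.h>

          static bool pci_use_crs;    /* stand-in for the real "use _CRS" flag */

          static int set_use_crs(const struct dmi_system_id *id)
          {
              pci_use_crs = true;
              return 0;
          }

          static const struct dmi_system_id pci_crs_quirks[] = {
              {
                  .callback = set_use_crs,
                  .ident = "ASRock ALiveSATA2-GLAN",
                  .matches = {
                      DMI_MATCH(DMI_PRODUCT_NAME, "ALiveSATA2-GLAN"),
                  },
              },
              {}
          };

          /* Somewhere during x86 PCI setup: dmi_check_system(pci_crs_quirks); */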
    • x86/PCI: Add option to not assign BARs if not already assigned · 7bd1c365
      Committed by Mike Habeck
      The Linux kernel assigns BARs that a BIOS did not assign, most likely
      to handle broken BIOSes that didn't enumerate the devices correctly.
      On UV the BIOS purposely doesn't assign I/O BARs for certain devices or
      drivers we know don't use them (for example LSI SAS, QLogic FC, ...).
      We purposely don't assign these I/O BARs because I/O space is a very
      limited resource.  There is only 64k of I/O space, and in a PCIe
      topology that space gets divided up into 4k chunks (because a
      PCI-to-PCI bridge's I/O decoder is aligned to 4k).  Thus a system can
      have at most 16 cards with I/O BARs (64k / 4k = 16).
      
      SGI needs to scale to more than 16 devices with I/O BARs.  Not
      assigning I/O BARs to devices we know don't use them makes that
      possible, provided the kernel doesn't go and assign the BARs that the
      BIOS purposely left unassigned.
      
      With this patch, the kernel will not assign a resource to a device BAR
      if the BIOS did not assign that BAR and the kernel cmdline option
      'pci=nobar' was specified.  The patch is closely modeled after the
      'pci=norom' option that currently exists in the tree.
      Signed-off-by: Mike Habeck <habeck@sgi.com>
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
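      Editor's note: a minimal sketch of the behaviour follows. The flag and helper
      names (pci_noassign_bars, parse_pci_nobar, skip_bar_assignment) are
      placeholders; the real patch plumbs this through the x86 PCI probe flags.

          #include <linux/types.h>
          #include <linux/string.h>
          #include <linux/ioport.h>

          static int pci_noassign_bars;           /* set by "pci=nobar" */

          /* Called from the pci= option parser (sketch). */
          static void parse_pci_nobar(const char *str)
          {
              if (!strcmp(str, "nobar"))
                  pci_noassign_bars = 1;
          }

          /* During resource assignment: leave a BAR alone if the BIOS left it
           * unassigned and the user asked us not to assign such BARs. */
          static bool skip_bar_assignment(const struct resource *res)
          {
              return pci_noassign_bars && res->start == 0;
          }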
    • x86/PCI: pci, fix section mismatch · 73cd3b43
      Committed by Jiri Slaby
      pcibios_scan_specific_bus calls pci_scan_bus_on_node, which is
      __devinit.  Mark pcibios_scan_specific_bus __devinit as well, since
      all of its users are now __init or __devinit.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
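      Editor's note: for readers unfamiliar with section-mismatch warnings, the fix
      amounts to an annotation change of roughly the shape below. The function names
      ending in _example are invented for the sketch; only the __devinit mechanics
      are the point.

          #include <linux/init.h>

          /* Callee already lives in the devinit section. */
          static int __devinit scan_bus_on_node_example(int busn) { return busn; }

          /* Without __devinit here, a plain .text function would reference
           * devinit code and modpost would warn about a section mismatch. */
          static int __devinit scan_specific_bus_example(int busn)
          {
              return scan_bus_on_node_example(busn);
          }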
  11. 17 July 2010 (2 commits)
    • x86, pci, mrst: Add extra sanity check in walking the PCI extended cap chain · f82c3d71
      Committed by Jacob Pan
      The fixed-BAR capability structure is searched for in PCI extended
      configuration space.  We need to make sure there is a valid capability
      ID to begin with; otherwise the search code may get stuck in an
      infinite loop, which results in a boot hang.  This patch adds an
      additional check for cap ID 0, which is also invalid and indicates the
      end of the chain.
      
      End of chain is supposed to have all fields zero, but that doesn't
      seem to always be the case in the field.
      Suggested-by: N"H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: NJacob Pan <jacob.jun.pan@linux.intel.com>
      Reviewed-by: NJesse Barnes <jbarnes@virtuousgeek.org>
      LKML-Reference: <1279306706-27087-1-git-send-email-jacob.jun.pan@linux.intel.com>
      Signed-off-by: NH. Peter Anvin <hpa@linux.intel.com>
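      Editor's note: to make the failure mode concrete, here is a hedged sketch of a
      generic extended-capability walk with the extra check. It mirrors the idea
      (stop on cap ID 0 as well as on a zero next pointer, and keep a hard iteration
      bound), not the mrst fixed-BAR lookup itself; find_ext_cap_example is an
      invented name.

          #include <linux/pci.h>

          static int find_ext_cap_example(struct pci_dev *dev, int cap)
          {
              int pos = 0x100;            /* extended config space starts here */
              int ttl = (4096 - 256) / 8; /* hard bound on chain length */
              u32 header;

              while (ttl--) {
                  if (pci_read_config_dword(dev, pos, &header))
                      break;
                  if (PCI_EXT_CAP_ID(header) == 0)
                      break;              /* invalid ID: treat as end of chain */
                  if (PCI_EXT_CAP_ID(header) == cap)
                      return pos;
                  pos = PCI_EXT_CAP_NEXT(header);
                  if (pos < 0x100)
                      break;              /* zero/invalid next pointer */
              }
              return 0;
          }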
    • PCI: fall back to original BIOS BAR addresses · 58c84eda
      Committed by Bjorn Helgaas
      If we fail to assign resources to a PCI BAR, this patch makes us try the
      original address from BIOS rather than leaving it disabled.
      
      Linux tries to make sure all PCI device BARs are inside the upstream
      PCI host bridge or P2P bridge apertures, reassigning BARs if necessary.
      Windows does similar reassignment.
      
      Before this patch, if we could not move a BAR into an aperture, we left
      the resource unassigned, i.e., at address zero.  Windows leaves such BARs
      at the original BIOS addresses, and this patch makes Linux do the same.
      
      This is a bit ugly because we disable the resource long before we try to
      reassign it, so we have to keep track of the BIOS BAR address somewhere.
      For lack of a better place, I put it in the struct pci_dev.
      
      I think it would be cleaner to attempt the assignment immediately when the
      claim fails, so we could easily remember the original address.  But we
      currently claim motherboard resources in the middle, after attempting to
      claim PCI resources and before assigning new PCI resources, and changing
      that is a fairly big job.
      
      Addresses https://bugzilla.kernel.org/show_bug.cgi?id=16263
      Reported-by: Andrew <nitr0@seti.kr.ua>
      Tested-by: Andrew <nitr0@seti.kr.ua>
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
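      Editor's note: a hedged sketch of the fallback follows. The storage and helper
      names (bios_fw_addr, remember_fw_addr, assign_or_fall_back) are placeholders,
      not the exact field added to struct pci_dev: record the firmware-programmed
      start address early, and restore it if pci_assign_resource() later fails.

          #include <linux/pci.h>

          /* In the real patch this lives per device; a flat array keeps the
           * sketch short. */
          static resource_size_t bios_fw_addr[DEVICE_COUNT_RESOURCE];

          static void remember_fw_addr(struct pci_dev *dev, int bar)
          {
              bios_fw_addr[bar] = pci_resource_start(dev, bar);
          }

          static void assign_or_fall_back(struct pci_dev *dev, int bar)
          {
              struct resource *res = &dev->resource[bar];
              resource_size_t size = resource_size(res);

              if (pci_assign_resource(dev, bar) == 0)
                  return;                          /* found room in an aperture */

              /* No aperture could hold the BAR: fall back to the BIOS address
               * instead of leaving the resource disabled at zero. */
              res->start = bios_fw_addr[bar];
              res->end = res->start + size - 1;
          }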
  12. 12 June 2010 (1 commit)
  13. 25 May 2010 (1 commit)
  14. 22 May 2010 (1 commit)
  15. 19 May 2010 (1 commit)
  16. 17 May 2010 (1 commit)
  17. 15 May 2010 (1 commit)
  18. 12 May 2010 (2 commits)
  19. 10 May 2010 (3 commits)
  20. 29 April 2010 (1 commit)
  21. 27 April 2010 (1 commit)
  22. 23 April 2010 (1 commit)