1. 11 December 2020 (1 commit)
  2. 09 December 2020 (1 commit)
  3. 05 December 2020 (4 commits)
  4. 01 December 2020 (3 commits)
  5. 21 November 2020 (1 commit)
  6. 31 October 2020 (1 commit)
  7. 01 October 2020 (3 commits)
  8. 30 September 2020 (1 commit)
  9. 18 September 2020 (2 commits)
    • PCI: Simplify pci_dev_reset_slot_function() · 10791141
      Committed by Lukas Wunner
      pci_dev_reset_slot_function() refuses to reset a hotplug slot if it is
      shared by multiple pci_devs.  That's the case if and only if the slot is
      occupied by a multifunction device.
      
      Simplify the function to check the device's multifunction flag instead
      of iterating over the devices on the bus.  (Iterating over the devices
      requires holding pci_bus_sem, which the function erroneously does not
      acquire.)
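      
      As a minimal sketch (assuming the existing struct pci_dev fields and the
      pci_reset_hotplug_slot() helper; illustrative, not the literal diff), the
      simplified function reduces to:
      
        static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe)
        {
                /* A multifunction device shares the slot with its siblings */
                if (dev->multifunction || !dev->slot)
                        return -ENOTTY;
      
                return pci_reset_hotplug_slot(dev->slot->hotplug, probe);
        }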
      
      Link: https://lore.kernel.org/r/c6aab5af096f7b1b3db57f6335cebba8f0fcca89.1595330431.git.lukas@wunner.de
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
    • PCI: pciehp: Reduce noisiness on hot removal · 8a614499
      Committed by Lukas Wunner
      When a PCIe card is hot-removed, the Presence Detect State and Data Link
      Layer Link Active bits often do not clear simultaneously.  I've seen delays
      of up to 244 msec between the two events with Thunderbolt.
      
      After pciehp has brought down the slot in response to the first event, the
      other bit may still be set.  It's not discernible whether it's set because
      a new card is already in the slot or if it will soon clear.  So pciehp
      tries to bring up the slot and in the latter case fails with a bunch of
      messages, some of them at KERN_ERR severity.  If the slot is no longer
      occupied, the messages are false positives and annoy users.
      
      Stuart Hayes reports the following splat on hot removal:
      
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Slot(180): Link Up
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Timeout waiting for Presence Detect
        KERN_ERR  pcieport 0000:3c:06.0: pciehp: link training error: status 0x0001
        KERN_ERR  pcieport 0000:3c:06.0: pciehp: Failed to check link status
      
      Dongdong Liu complains about a similar splat:
      
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): Link Down
        KERN_INFO iommu: Removing device 0000:87:00.0 from group 12
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): Card present
        KERN_INFO pcieport 0000:80:10.0: Data Link Layer Link Active not set in 1000 msec
        KERN_ERR  pciehp 0000:80:10.0:pcie004: Failed to check link status
      
      Users are particularly irritated to see a bringup attempt even though the
      slot was explicitly brought down via sysfs.  In a perfect world, we could
      avoid this by setting Link Disable on slot bringdown and re-enabling it
      upon a Presence Detect State change.  In reality however, there are broken
      hotplug ports which hardwire Presence Detect to zero, see 80696f99
      ("PCI: pciehp: Tolerate Presence Detect hardwired to zero").  Conversely,
      PCIe r1.0 hotplug ports hardwire Link Active to zero because Link Active
      Reporting wasn't specified before PCIe r1.1.  On unplug, some ports first
      clear Presence then Link (see Stuart Hayes' splat) whereas others use the
      inverse order (see Dongdong Liu's splat).  To top it off, there are hotplug
      ports which flap the Presence and Link bits on slot bringup, see
      6c35a1ac ("PCI: pciehp: Tolerate initially unstable link").
      
      pciehp is designed to work with all of these variants.  Surplus attempts at
      slot bringup are a lesser evil than not being able to bring up slots at
      all.  Although we could try to perfect the behavior for specific hotplug
      controllers, we'd risk breaking others or increasing code complexity.
      
      But we can certainly minimize annoyance by emitting only a single message
      with KERN_INFO severity if bringup is unsuccessful:
      
      * Drop the "Timeout waiting for Presence Detect" message in
        pcie_wait_for_presence().  The sole caller of that function,
        pciehp_check_link_status(), ignores the timeout and carries on.  It emits
        error messages of its own and I don't think this particular message adds
        much value.
      
      * There's a single error condition in pciehp_check_link_status() which
        does not emit a message.  Adding one allows dropping the "Failed to check
        link status" message emitted by board_added() if
        pciehp_check_link_status() returns a non-zero integer.
      
      * Tone down all messages in pciehp_check_link_status() to KERN_INFO
        severity and rephrase them to look as innocuous as possible.  To this
        end, move the message emitted by pcie_wait_for_link_delay() to its
        callers.
      
      As a result, Stuart Hayes' splat becomes:
      
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Slot(180): Link Up
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Slot(180): Cannot train link: status 0x0001
      
      Dongdong Liu's splat becomes:
      
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): Card present
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): No link
      
      The messages now merely serve as information that presence or link bits
      were set a little longer than expected.  Bringup failures which are not
      false positives are still reported, albeit no longer at KERN_ERR severity.
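      
      As an illustrative sketch (reusing pciehp's existing ctrl_info() and
      slot_name() helpers; not a verbatim quote of the patch), the demoted
      link-training check in pciehp_check_link_status() looks roughly like:
      
        pcie_capability_read_word(ctrl->pcie->port, PCI_EXP_LNKSTA, &lnk_status);
        if ((lnk_status & PCI_EXP_LNKSTA_LT) ||
            !(lnk_status & PCI_EXP_LNKSTA_NLW)) {
                /* KERN_INFO rather than KERN_ERR: may be a false positive */
                ctrl_info(ctrl, "Slot(%s): Cannot train link: status %#06x\n",
                          slot_name(ctrl), lnk_status);
                return -1;
        }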
      
      Link: https://lore.kernel.org/linux-pci/20200310182100.102987-1-stuart.w.hayes@gmail.com/
      Link: https://lore.kernel.org/linux-pci/1547649064-19019-1-git-send-email-liudongdong3@huawei.com/
      Link: https://lore.kernel.org/r/b45e46fd8a6aa6930aaac9d7718c2e4b787a4e5e.1595935071.git.lukas@wunner.de
      Reported-by: Stuart Hayes <stuart.w.hayes@gmail.com>
      Reported-by: Dongdong Liu <liudongdong3@huawei.com>
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  10. 17 September 2020 (1 commit)
    • PCI/ACS: Enable Translation Blocking for external devices · 76fc8e85
      Committed by Rajat Jain
      Translation Blocking is a required feature for Downstream Ports (Root
      Ports or Switch Downstream Ports) that implement ACS.  When enabled, the
      Port checks the Address Type (AT) of each upstream Memory Request it
      receives.
      
      The default AT (00b) means "untranslated" and the IOMMU can decide whether
      to treat the address as I/O virtual or physical.
      
      If AT is not the default, i.e., if the Memory Request contains an
      already-translated (physical) address, the Port blocks the request and
      reports an ACS error.
      
      When enabling ACS, enable Translation Blocking for external-facing ports
      and untrusted (external) devices.  This is to help prevent attacks from
      external devices that initiate DMA with physical addresses that bypass the
      IOMMU.
      
      [bhelgaas: commit log, simplify setting bit and drop warning; TB is
      required for Downstream Ports with ACS, so we should never see the warning]
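      
      A hedged sketch of the resulting logic in the standard ACS setup path
      (the location pci_std_enable_acs() and the cap/ctrl locals are assumed
      context; PCI_ACS_TB is the Translation Blocking enable bit from the
      uapi pci_regs.h):
      
        /* Block already-translated (physical-address) requests coming from
         * external-facing ports and untrusted devices.
         */
        if (dev->external_facing || dev->untrusted)
                ctrl |= (cap & PCI_ACS_TB);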
      Link: https://lore.kernel.org/r/20200707224604.3737893-4-rajatja@google.com
      Signed-off-by: Rajat Jain <rajatja@google.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  11. 02 September 2020 (1 commit)
  12. 01 September 2020 (1 commit)
  13. 24 August 2020 (1 commit)
  14. 23 July 2020 (1 commit)
  15. 22 July 2020 (1 commit)
  16. 11 July 2020 (2 commits)
  17. 27 June 2020 (1 commit)
    • PCI: Convert PCIe capability PCIBIOS errors to errno · d20df83b
      Committed by Bolarinwa Olayemi Saheed
      The PCI config accessors (pci_read_config_word(), et al) return
      PCIBIOS_SUCCESSFUL (zero) or positive error values like
      PCIBIOS_FUNC_NOT_SUPPORTED.
      
      The PCIe capability accessors (pcie_capability_read_word(), et al)
      similarly return PCIBIOS errors, but some callers assume they return
      generic errno values like -EINVAL.
      
      For example, the Myri-10G probe function returns a positive PCIBIOS error
      if the pcie_capability_clear_and_set_word() in pcie_set_readrq() fails:
      
        myri10ge_probe
          status = pcie_set_readrq
            return pcie_capability_clear_and_set_word
          if (status)
            return status
      
      A positive return from a PCI driver probe function would cause a "Driver
      probe function unexpectedly returned" warning from local_pci_probe()
      instead of the desired probe failure.
      
      Convert PCIBIOS errors to generic errno for all callers of:
      
        pcie_capability_read_word
        pcie_capability_read_dword
        pcie_capability_write_word
        pcie_capability_write_dword
        pcie_capability_set_word
        pcie_capability_set_dword
        pcie_capability_clear_word
        pcie_capability_clear_dword
        pcie_capability_clear_and_set_word
        pcie_capability_clear_and_set_dword
      
      that check the return code for anything other than zero.
      
      [bhelgaas: commit log, squash together]
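      
      As an example of the conversion pattern (sketched from the pcie_set_readrq()
      call chain above; pcibios_err_to_errno() is the existing helper in
      <linux/pci.h>, and ret/v are assumed locals of that function):
      
        ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
                                                 PCI_EXP_DEVCTL_READRQ, v);
      
        /* 0 stays 0; positive PCIBIOS_* codes become negative errno values */
        return pcibios_err_to_errno(ret);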
      Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
      Link: https://lore.kernel.org/r/20200615073225.24061-1-refactormyself@gmail.com
      Signed-off-by: Bolarinwa Olayemi Saheed <refactormyself@gmail.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  18. 16 May 2020 (2 commits)
    • PCI/PM: Assume ports without DLL Link Active train links in 100 ms · ec411e02
      Committed by Mika Westerberg
      Kai-Heng Feng reported that it takes a long time (> 1 s) to resume
      Thunderbolt-connected devices from both runtime suspend and system sleep
      (s2idle).
      
      This was because some Downstream Ports that support > 5 GT/s do not also
      support Data Link Layer Link Active reporting.  Per PCIe r5.0 sec 6.6.1:
      
        With a Downstream Port that supports Link speeds greater than 5.0 GT/s,
        software must wait a minimum of 100 ms after Link training completes
        before sending a Configuration Request to the device immediately below
        that Port. Software can determine when Link training completes by polling
        the Data Link Layer Link Active bit or by setting up an associated
        interrupt (see Section 6.7.3.3).
      
      Sec 7.5.3.6 requires such Ports to support DLL Link Active reporting, but
      at least the Intel JHL6240 Thunderbolt 3 Bridge [8086:15c0] and the Intel
      JHL7540 Thunderbolt 3 Bridge [8086:15ea] do not.
      
      Previously we tried to wait for Link training to complete, but since there
      was no DLL Link Active reporting, all we could do was wait the worst-case
      1000 ms, then another 100 ms.
      
      Instead of using the supported speeds to determine whether to wait for Link
      training, check whether the port supports DLL Link Active reporting.  The
      Ports in question do not, so we'll wait only the 100 ms required for Ports
      that support Link speeds <= 5 GT/s.
      
      This of course assumes these Ports always train the Link within 100 ms even
      if they are operating at > 5 GT/s, which is not required by the spec.
      
      [bhelgaas: commit log, comment]
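      
      A sketch of the new decision (names such as link_active_reporting and
      pcie_wait_for_link_delay() come from the existing code referenced above;
      this is not the literal patch):
      
        if (!dev->link_active_reporting) {
                /* No DLL Link Active bit to poll: assume the link trains
                 * within 100 ms and simply wait the mandated delay.
                 */
                msleep(delay);
        } else {
                /* Poll DLL Link Active, then wait 'delay' ms on top of it */
                if (!pcie_wait_for_link_delay(dev, true, delay))
                        return;         /* link did not train; stop waiting */
        }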
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
      Link: https://lore.kernel.org/r/20200514133043.27429-1-mika.westerberg@linux.intel.com
      Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    • PCI/PM: Adjust pcie_wait_for_link_delay() for caller delay · f044baaf
      Committed by Bjorn Helgaas
      The caller of pcie_wait_for_link_delay() specifies the time to wait after
      the link becomes active.  When the downstream port doesn't support link
      active reporting, obviously we can't tell when the link becomes active, so
      we waited the worst-case time (1000 ms) plus 100 ms, ignoring the delay
      from the caller.
      
      Instead, wait for 1000 ms + the delay from the caller.
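      
      Sketched against the pcie_wait_for_link_delay() path described above (with
      'timeout' as the 1000 ms worst case; illustrative, not the literal diff):
      
        /* Without Link Active reporting we cannot observe training, so wait
         * the worst case plus whatever delay the caller asked for.
         */
        if (!pdev->link_active_reporting) {
                msleep(timeout + delay);
                return true;
        }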
      
      Fixes: 4827d638 ("PCI/PM: Add pcie_wait_for_link_delay()")
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  19. 15 May 2020 (1 commit)
  20. 12 May 2020 (1 commit)
    • PCI: Replace zero-length array with flexible-array · 914a1951
      Committed by Gustavo A. R. Silva
      The current codebase makes use of the zero-length array language extension
      to the C90 standard, but the preferred mechanism to declare variable-length
      types such as these is a flexible array member [1][2], introduced in C99:
      
        struct foo {
          int stuff;
          struct boo array[];
        };
      
      By using the mechanism above, we get a compiler warning whenever the
      flexible array does not occur last in the structure, which helps prevent
      undefined-behavior bugs like [3] from being inadvertently introduced into
      the codebase from now on.
      
      Also, notice that dynamic memory allocations won't be affected by this
      change:
      
        Flexible array members have incomplete type, and so the sizeof operator
        may not be applied. As a quirk of the original implementation of
        zero-length arrays, sizeof evaluates to zero. [1]
      
      sizeof(flexible-array-member) triggers a warning because flexible array
      members have incomplete type [1]. There are some instances of code in which
      the sizeof() operator is incorrectly applied to zero-length arrays, where the
      result is always zero. Such instances may be hiding bugs, so this work
      (flexible-array member conversions) also helps get rid of those issues.
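      
      A minimal standalone illustration of the conversion and of the sizeof
      difference (not taken from the patch itself):
      
        struct boo { int x; };
      
        struct foo_old {                /* C90 GNU extension */
                int stuff;
                struct boo array[0];    /* sizeof(array) quirkily evaluates to 0 */
        };
      
        struct foo_new {                /* C99 flexible array member */
                int stuff;
                struct boo array[];     /* sizeof(array) is a compile-time error */
        };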
      
      This issue was found with the help of Coccinelle.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
      
      Link: https://lore.kernel.org/r/20200507190544.GA15633@embeddedor
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  21. 25 April 2020 (1 commit)
  22. 29 March 2020 (1 commit)
  23. 11 March 2020 (2 commits)
    • PCI: Add PCIE_LNKCAP2_SLS2SPEED() macro · 757bfaa2
      Committed by Yicong Yang
      Add a PCIE_LNKCAP2_SLS2SPEED() macro for transforming raw Link Capabilities 2
      values into pci_bus_speed values. Place it next to PCIE_SPEED2MBS_ENC() so
      both macros are easy to update together when adding support for new speeds.
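      
      Roughly, the macro is a chain of tests over the Supported Link Speeds
      Vector (sketched from the description above; the PCI_EXP_LNKCAP2_SLS_*
      and PCIE_SPEED_* constants are the existing ones from pci_regs.h and
      <linux/pci.h>):
      
        #define PCIE_LNKCAP2_SLS2SPEED(lnkcap2) \
                ((lnkcap2) & PCI_EXP_LNKCAP2_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_8_0GB  ? PCIE_SPEED_8_0GT  : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_5_0GB  ? PCIE_SPEED_5_0GT  : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_2_5GB  ? PCIE_SPEED_2_5GT  : \
                 PCI_SPEED_UNKNOWN)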
      
      Link: https://lore.kernel.org/r/1581937984-40353-10-git-send-email-yangyicong@hisilicon.com
      Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    • PCI: Use pci_speed_string() for all PCI/PCI-X/PCIe strings · 6348a34d
      Committed by Bjorn Helgaas
      Previously some PCI speed strings came from pci_speed_string(), some came
      from the PCIe-specific PCIE_SPEED2STR(), and some came from a PCIe-specific
      switch statement.  These methods were inconsistent:
      
        pci_speed_string()     PCIE_SPEED2STR()     switch
        ------------------     ----------------     ------
        33 MHz PCI
        ...
        2.5 GT/s PCIe          2.5 GT/s             2.5 GT/s
        5.0 GT/s PCIe          5 GT/s               5 GT/s
        8.0 GT/s PCIe          8 GT/s               8 GT/s
        16.0 GT/s PCIe         16 GT/s              16 GT/s
        32.0 GT/s PCIe         32 GT/s              32 GT/s
      
      Standardize on pci_speed_string() as the single source of these strings.
      
      Note that this adds ".0" and "PCIe" to some messages, including sysfs
      "max_link_speed" files, a brcmstb "link up" message, and the link status
      dmesg logging, e.g.,
      
        nvme 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:01.1 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
      
      I think it's better to standardize on a single version of the speed text.
      Previously we had strings like this:
      
        /sys/bus/pci/slots/0/cur_bus_speed: 8.0 GT/s PCIe
        /sys/bus/pci/slots/0/max_bus_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8 GT/s
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8 GT/s
      
      This changes the latter two to match the slots files:
      
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8.0 GT/s PCIe
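      
      A hedged sketch of the consolidation in drivers/pci/pci.h (the declaration
      is shown only as assumed context): the PCIe-only macro becomes a thin
      wrapper around the common lookup, so every caller emits the same
      "8.0 GT/s PCIe" style text:
      
        const char *pci_speed_string(enum pci_bus_speed speed);
      
        #define PCIE_SPEED2STR(speed)   pci_speed_string(speed)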
      
      Based-on-patch-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  24. 06 March 2020 (1 commit)
  25. 05 March 2020 (1 commit)
  26. 25 January 2020 (1 commit)
  27. 14 January 2020 (1 commit)
  28. 06 January 2020 (1 commit)
  29. 23 December 2019 (1 commit)