1. 30 Sep 2020, 1 commit
  2. 23 Jul 2020, 1 commit
  3. 22 Jul 2020, 1 commit
  4. 11 Jul 2020, 2 commits
  5. 27 Jun 2020, 1 commit
    • PCI: Convert PCIe capability PCIBIOS errors to errno · d20df83b
      Committed by Bolarinwa Olayemi Saheed
      The PCI config accessors (pci_read_config_word(), et al) return
      PCIBIOS_SUCCESSFUL (zero) or positive error values like
      PCIBIOS_FUNC_NOT_SUPPORTED.
      
      The PCIe capability accessors (pcie_capability_read_word(), et al)
      similarly return PCIBIOS errors, but some callers assume they return
      generic errno values like -EINVAL.
      
      For example, the Myri-10G probe function returns a positive PCIBIOS error
      if the pcie_capability_clear_and_set_word() in pcie_set_readrq() fails:
      
        myri10ge_probe
          status = pcie_set_readrq
            return pcie_capability_clear_and_set_word
          if (status)
            return status
      
      A positive return from a PCI driver probe function would cause a "Driver
      probe function unexpectedly returned" warning from local_pci_probe()
      instead of the desired probe failure.
      
      Convert PCIBIOS errors to generic errno for all callers of:
      
        pcie_capability_read_word
        pcie_capability_read_dword
        pcie_capability_write_word
        pcie_capability_write_dword
        pcie_capability_set_word
        pcie_capability_set_dword
        pcie_capability_clear_word
        pcie_capability_clear_dword
        pcie_capability_clear_and_set_word
        pcie_capability_clear_and_set_dword
      
      that check the return code for anything other than zero.
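
      A minimal sketch of the conversion pattern, assuming the kernel's
      existing pcibios_err_to_errno() helper; the wrapper name
      example_set_readrq() is hypothetical:

        /* pcie_capability_clear_and_set_word() returns 0 or a positive
         * PCIBIOS_* code; pcibios_err_to_errno() maps that to a negative
         * errno (0 stays 0), which is what probe paths must return.
         */
        static int example_set_readrq(struct pci_dev *dev, u16 ctl)
        {
          int ret;

          ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
                                                   PCI_EXP_DEVCTL_READRQ, ctl);
          return pcibios_err_to_errno(ret);
        }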
      
      [bhelgaas: commit log, squash together]
      Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
      Link: https://lore.kernel.org/r/20200615073225.24061-1-refactormyself@gmail.com
      Signed-off-by: Bolarinwa Olayemi Saheed <refactormyself@gmail.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  6. 16 May 2020, 2 commits
    • PCI/PM: Assume ports without DLL Link Active train links in 100 ms · ec411e02
      Committed by Mika Westerberg
      Kai-Heng Feng reported that it takes a long time (> 1 s) to resume
      Thunderbolt-connected devices from both runtime suspend and system sleep
      (s2idle).
      
      This was because some Downstream Ports that support > 5 GT/s do not also
      support Data Link Layer Link Active reporting.  Per PCIe r5.0 sec 6.6.1:
      
        With a Downstream Port that supports Link speeds greater than 5.0 GT/s,
        software must wait a minimum of 100 ms after Link training completes
        before sending a Configuration Request to the device immediately below
        that Port. Software can determine when Link training completes by polling
        the Data Link Layer Link Active bit or by setting up an associated
        interrupt (see Section 6.7.3.3).
      
      Sec 7.5.3.6 requires such Ports to support DLL Link Active reporting, but
      at least the Intel JHL6240 Thunderbolt 3 Bridge [8086:15c0] and the Intel
      JHL7540 Thunderbolt 3 Bridge [8086:15ea] do not.
      
      Previously we tried to wait for Link training to complete, but since there
      was no DLL Link Active reporting, all we could do was wait the worst-case
      1000 ms, then another 100 ms.
      
      Instead of using the supported speeds to determine whether to wait for Link
      training, check whether the port supports DLL Link Active reporting.  The
      Ports in question do not, so we'll wait only the 100 ms required for Ports
      that support Link speeds <= 5 GT/s.
      
      This of course assumes these Ports always train the Link within 100 ms even
      if they are operating at > 5 GT/s, which is not required by the spec.
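
      A sketch of the resulting wait logic; it assumes struct pci_dev's
      link_active_reporting flag reflects DLL Link Active support, and
      example_wait_for_link() is a hypothetical name:

        static bool example_wait_for_link(struct pci_dev *bridge, int delay_ms)
        {
          int timeout = 1000; /* worst-case Link training time, in ms */
          u16 lnksta;

          if (!bridge->link_active_reporting) {
            /* No DLL Link Active bit to poll: assume the Port trains
             * its Link within the 100 ms required for <= 5 GT/s Ports,
             * even when running faster. */
            msleep(delay_ms);
            return true;
          }

          /* Poll DLL Link Active, then apply the caller's delay */
          while (timeout > 0) {
            pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta);
            if (lnksta & PCI_EXP_LNKSTA_DLLLA) {
              msleep(delay_ms);
              return true;
            }
            msleep(10);
            timeout -= 10;
          }
          return false;
        }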
      
      [bhelgaas: commit log, comment]
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
      Link: https://lore.kernel.org/r/20200514133043.27429-1-mika.westerberg@linux.intel.com
      Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    • PCI/PM: Adjust pcie_wait_for_link_delay() for caller delay · f044baaf
      Committed by Bjorn Helgaas
      The caller of pcie_wait_for_link_delay() specifies the time to wait after
      the link becomes active.  When the downstream port doesn't support link
      active reporting, obviously we can't tell when the link becomes active, so
      we waited the worst-case time (1000 ms) plus 100 ms, ignoring the delay
      from the caller.
      
      Instead, wait for 1000 ms + the delay from the caller.
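
      In code form the change is roughly the following sketch (not the
      verbatim diff; timeout is the 1000 ms worst case and delay is the
      caller-specified value):

        if (!pdev->link_active_reporting) {
          /* Previously: msleep(timeout + 100), which dropped the
           * caller-specified delay on the floor. */
          msleep(timeout + delay);
          return true;
        }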
      
      Fixes: 4827d638 ("PCI/PM: Add pcie_wait_for_link_delay()")
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  7. 15 May 2020, 1 commit
  8. 12 May 2020, 1 commit
    • PCI: Replace zero-length array with flexible-array · 914a1951
      Committed by Gustavo A. R. Silva
      The current codebase makes use of the zero-length array language
      extension to the C90 standard, but the preferred mechanism to
      declare variable-length types such as these is a flexible array
      member [1][2], introduced in C99:
      
        struct foo {
          int stuff;
          struct boo array[];
        };
      
      By making use of the mechanism above, we get a compiler warning
      whenever a flexible array member is not the last member of its
      structure, which helps prevent some kinds of undefined-behavior
      bugs from being inadvertently introduced [3] into the codebase
      from now on.
      
      Also, notice that dynamic memory allocations won't be affected by this
      change:
      
        Flexible array members have incomplete type, and so the sizeof operator
        may not be applied. As a quirk of the original implementation of
        zero-length arrays, sizeof evaluates to zero. [1]
      
      sizeof(flexible-array-member) triggers a warning because flexible
      array members have incomplete type [1]. There are some instances of
      code in which the sizeof() operator is erroneously applied to
      zero-length arrays, where it silently evaluates to zero; such
      instances may be hiding real bugs. This work (the flexible-array
      member conversions) therefore also helps to root out those issues.
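
      A before/after sketch of the conversion and the sizeof behavior
      described above (the struct names are illustrative; struct_size()
      is the kernel helper commonly used for the allocation):

        struct old {
          int stuff;
          struct boo array[0];  /* sizeof(o->array) quietly yields 0 */
        };

        struct new {
          int stuff;
          struct boo array[];   /* sizeof(n->array) fails to compile */
        };

        /* Dynamic allocation is unaffected by the conversion: */
        struct new *n = kmalloc(struct_size(n, array, count), GFP_KERNEL);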
      
      This issue was found with the help of Coccinelle.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
      
      Link: https://lore.kernel.org/r/20200507190544.GA15633@embeddedor
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  9. 25 Apr 2020, 1 commit
  10. 29 Mar 2020, 1 commit
  11. 11 Mar 2020, 2 commits
    • PCI: Add PCIE_LNKCAP2_SLS2SPEED() macro · 757bfaa2
      Committed by Yicong Yang
      Add a PCIE_LNKCAP2_SLS2SPEED() macro for transforming raw Link
      Capabilities 2 values into pci_bus_speed values. It lives next to
      PCIE_SPEED2MBS_ENC() so both places are easy to update when adding
      support for new speeds, as sketched below.
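
      A sketch of what such a macro can look like, mirroring the cascade
      style of PCIE_SPEED2MBS_ENC(); the PCI_EXP_LNKCAP2_SLS_* bits and
      PCIE_SPEED_* enum values are the standard definitions:

        /* Map the Supported Link Speeds field of Link Capabilities 2
         * to the corresponding pci_bus_speed value.
         */
        #define PCIE_LNKCAP2_SLS2SPEED(lnkcap2) \
          ((lnkcap2) & PCI_EXP_LNKCAP2_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
           (lnkcap2) & PCI_EXP_LNKCAP2_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
           (lnkcap2) & PCI_EXP_LNKCAP2_SLS_8_0GB ? PCIE_SPEED_8_0GT : \
           (lnkcap2) & PCI_EXP_LNKCAP2_SLS_5_0GB ? PCIE_SPEED_5_0GT : \
           (lnkcap2) & PCI_EXP_LNKCAP2_SLS_2_5GB ? PCIE_SPEED_2_5GT : \
           PCI_SPEED_UNKNOWN)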
      
      Link: https://lore.kernel.org/r/1581937984-40353-10-git-send-email-yangyicong@hisilicon.com
      Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    • PCI: Use pci_speed_string() for all PCI/PCI-X/PCIe strings · 6348a34d
      Committed by Bjorn Helgaas
      Previously some PCI speed strings came from pci_speed_string(), some came
      from the PCIe-specific PCIE_SPEED2STR(), and some came from a PCIe-specific
      switch statement.  These methods were inconsistent:
      
        pci_speed_string()     PCIE_SPEED2STR()     switch
        ------------------     ----------------     ------
        33 MHz PCI
        ...
        2.5 GT/s PCIe          2.5 GT/s             2.5 GT/s
        5.0 GT/s PCIe          5 GT/s               5 GT/s
        8.0 GT/s PCIe          8 GT/s               8 GT/s
        16.0 GT/s PCIe         16 GT/s              16 GT/s
        32.0 GT/s PCIe         32 GT/s              32 GT/s
      
      Standardize on pci_speed_string() as the single source of these strings.
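
      Callers then go through the one accessor, roughly (a hedged usage
      sketch; pcie_get_speed_cap() is the existing helper that returns an
      enum pci_bus_speed):

        enum pci_bus_speed speed = pcie_get_speed_cap(dev);

        pci_info(dev, "link speed: %s\n", pci_speed_string(speed));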
      
      Note that this adds ".0" and "PCIe" to some messages, including sysfs
      "max_link_speed" files, a brcmstb "link up" message, and the link status
      dmesg logging, e.g.,
      
        nvme 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:01.1 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
      
      I think it's better to standardize on a single version of the speed text.
      Previously we had strings like this:
      
        /sys/bus/pci/slots/0/cur_bus_speed: 8.0 GT/s PCIe
        /sys/bus/pci/slots/0/max_bus_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8 GT/s
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8 GT/s
      
      This changes the latter two to match the slots files:
      
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8.0 GT/s PCIe
      
      Based-on-patch-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  12. 06 3月, 2020 1 次提交
  13. 05 3月, 2020 1 次提交
  14. 25 1月, 2020 1 次提交
  15. 14 1月, 2020 1 次提交
  16. 06 1月, 2020 1 次提交
  17. 23 12月, 2019 1 次提交
  18. 19 12月, 2019 2 次提交
  19. 21 11月, 2019 14 次提交
  20. 23 10月, 2019 1 次提交
  21. 21 10月, 2019 1 次提交
  22. 19 10月, 2019 1 次提交
  23. 16 10月, 2019 1 次提交