1. 11 Dec, 2020 (1 commit)
  2. 04 Aug, 2020 (1 commit)
  3. 23 Jul, 2020 (2 commits)
  4. 10 Jul, 2020 (1 commit)
  5. 13 Dec, 2019 (1 commit)
  6. 21 Nov, 2019 (1 commit)
  7. 29 Oct, 2019 (1 commit)
  8. 25 Oct, 2019 (1 commit)
  9. 09 Jul, 2019 (1 commit)
  10. 07 May, 2019 (1 commit)
    • PCI: iproc: Add sorted dma ranges resource entries to host bridge · 90199c95
      Srinath Mannam authored
      The iProc host controller allows only a subset of the physical address
      space to be the target of inbound PCI memory transactions.
      
      PCI device memory transactions targeting memory regions that are not
      allowed for inbound transactions in the host controller are rejected by the
      host controller and cannot reach the upstream buses.
      
      The firmware device tree description defines the DMA ranges that are
      addressable by devices' DMA transactions; parse the device tree dma-ranges
      property and add its ranges to the PCI host bridge dma_ranges list; the
      iova_reserve_pci_windows() call executed at iommu_dma_init_domain() will
      reserve the IOVA address ranges that are not addressable (ie memory holes
      in the dma-ranges set) so that they are not allocated to PCI devices for
      DMA transfers.
      
      All allowed address ranges are listed in the dma-ranges DT property.  For
      example:
      
        dma-ranges = < \
          0x43000000 0x00 0x80000000 0x00 0x80000000 0x00 0x80000000 \
          0x43000000 0x08 0x00000000 0x08 0x00000000 0x08 0x00000000 \
          0x43000000 0x80 0x00000000 0x80 0x00000000 0x40 0x00000000>
      
      In the above example of dma-ranges, memory addresses in the ranges
      
        0x0 - 0x80000000,
        0x100000000 - 0x800000000,
        0x1000000000 - 0x8000000000 and
        0xc000000000 - 0xffffffffffffffff
      
      are not allowed to be used as inbound addresses.
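      
      A minimal sketch of the parsing step, not the exact driver code: the helper
      name, the devm allocation and the dma_ranges list head are illustrative
      only, while of_pci_dma_range_parser_init() and for_each_of_pci_range() are
      the standard OF helpers for walking a dma-ranges property.
      
        #include <linux/device.h>
        #include <linux/errno.h>
        #include <linux/of_address.h>
        #include <linux/pci.h>
        #include <linux/slab.h>
        
        /* Hypothetical helper: collect every dma-ranges window into a resource list. */
        static int iproc_add_dma_ranges(struct device *dev, struct list_head *dma_ranges)
        {
            struct of_pci_range_parser parser;
            struct of_pci_range range;
            struct resource *res;
        
            if (of_pci_dma_range_parser_init(&parser, dev->of_node))
                return -ENOENT;
        
            for_each_of_pci_range(&parser, &range) {
                res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
                if (!res)
                    return -ENOMEM;
        
                res->start = range.cpu_addr;
                res->end   = range.cpu_addr + range.size - 1;
                res->flags = IORESOURCE_MEM;
        
                /* iova_reserve_pci_windows() later reserves the holes between
                 * these windows so they are never handed out as DMA addresses. */
                pci_add_resource_offset(dma_ranges, res,
                                        range.cpu_addr - range.pci_addr);
            }
        
            return 0;
        }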
      Based-on-a-patch-by: Oza Pawandeep <oza.oza@broadcom.com>
      Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
      [lorenzo.pieralisi@arm.com: updated commit log]
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      [bhelgaas: fix function prototype style]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Oza Pawandeep <poza@codeaurora.org>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
  11. 30 Apr, 2019 (1 commit)
  12. 03 Apr, 2019 (2 commits)
    • PCI: iproc: Allow outbound configuration for 32-bit I/O region · ea2df11f
      Srinath Mannam authored
      The IProc host controller has I/O memory windows allocated in
      the AXI memory map that can be used to address PCI I/O memory
      space.
      
      Mapping from AXI memory windows to PCI outbound memory windows is
      carried out in the host controller through OARR/OMAP register pairs,
      which define power-of-two-sized AXI<->PCI mappings, the smallest of
      which is 128MB.
      
      Current code enables AXI memory window to PCI outbound memory window
      mapping only for AXI windows whose size exactly matches one of the
      OARR/OMAP window sizes, which are SoC dependent, the smallest of them
      being 128MB.
      
      Some SoCs implementing the IProc host controller have a 32-bit AXI
      memory window into PCI I/O memory space, eg:
      
          Base address | Size
      -----------------------------
      (1) 0x42000000   | 0x2000000
      (2) 0x400000000  | 0x80000000
      
      but the size of the 32-bit window ((1) above, 32MB) is smaller than the
      smallest AXI<->PCI region size provided by OARR (128MB), so the current
      driver rejects the mapping for the 32-bit region, making the IProc host
      controller driver unusable on 32-bit systems.
      
      However, there is no reason why the 32-bit I/O memory window cannot be
      enabled by mapping it through an OARR/OMAP region that is bigger in size
      (i.e. the 32-bit AXI window is only 32MB but can still be mapped using a
      128MB OARR/OMAP region).
      
      Allow outbound window configuration of I/O memory windows that
      are smaller in size than the host controller OARR/OMAP region, so
      that the 32-bit AXI memory window can actually be enabled,
      making the IProc host controller operational on 32-bit systems.
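      
      A sketch of the relaxed size check, with a hypothetical window_sizes[]
      table standing in for the SoC-specific OARR/OMAP sizes: instead of
      requiring an exact match, accept the smallest OARR/OMAP region that can
      contain the AXI window.
      
        #include <linux/errno.h>
        #include <linux/kernel.h>
        #include <linux/sizes.h>
        #include <linux/types.h>
        
        /* Hypothetical table; the real sizes are per-SoC controller data. */
        static const u64 window_sizes[] = { SZ_128M, SZ_256M, SZ_512M, SZ_1G };
        
        static int pick_oarr_window(u64 axi_size, u64 *oarr_size)
        {
            unsigned int i;
        
            for (i = 0; i < ARRAY_SIZE(window_sizes); i++) {
                /* Old behaviour: accept only axi_size == window_sizes[i].
                 * New behaviour: any OARR region at least as large as the AXI
                 * window works, so a 32MB I/O window can use a 128MB region. */
                if (axi_size <= window_sizes[i]) {
                    *oarr_size = window_sizes[i];
                    return 0;
                }
            }
        
            return -EINVAL;
        }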
      
      Link: https://lore.kernel.org/linux-pci/1551415936-30174-3-git-send-email-srinath.mannam@broadcom.com/
      Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
      Signed-off-by: Abhishek Shah <abhishek.shah@broadcom.com>
      Signed-off-by: Ray Jui <ray.jui@broadcom.com>
      [lorenzo.pieralisi@arm.com: rewrote the commit log]
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Scott Branden <scott.branden@broadcom.com>
    • PCI: iproc: Add CRS check in config read · 73b9e4d3
      Srinath Mannam authored
      The IPROC PCIe host controller implementation returns CFG_RETRY_STATUS
      (0xffff0001) data when it receives a CRS completion, regardless of the
      address of the read or the CRS Software Visibility Enable bit. As a
      workaround the driver retries in software any read that returns
      CFG_RETRY_STATUS, even though, for registers other than the Vendor ID,
      0xffff0001 may be the actual register value; in that case the retry
      loop times out and the read of a perfectly valid register value fails.
      
      The PAXB v2 IPROC PCIe host controller has a register that reports
      config read completion status flags such as SC, UR, CRS and CA. Add an
      extra check of this status register to confirm that a read really
      completed with CRS before reissuing the config read, fixing the issue.
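      
      A sketch of the extra check; the status register offset and the CRS
      encoding below are hypothetical placeholders, not the real PAXB v2
      register layout.
      
        #include <linux/delay.h>
        #include <linux/io.h>
        
        #define CFG_RETRY_STATUS        0xffff0001
        #define CFG_RD_STATUS_OFFSET    0x1b8   /* hypothetical status register offset */
        #define CFG_RD_STATUS_CRS       0x2     /* hypothetical "completed with CRS" code */
        
        static u32 paxb_v2_cfg_read(void __iomem *ctrl_base, void __iomem *cfg_addr)
        {
            unsigned int retries = 0;
            u32 data = readl(cfg_addr);
        
            while (data == CFG_RETRY_STATUS && retries++ < 100) {
                /* 0xffff0001 may be genuine register data; only retry when the
                 * controller's read-status register confirms a CRS completion. */
                if (readl(ctrl_base + CFG_RD_STATUS_OFFSET) != CFG_RD_STATUS_CRS)
                    break;
        
                udelay(20);
                data = readl(cfg_addr);
            }
        
            return data;
        }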
      Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
      [lorenzo.pieralisi@arm.com: rewrote commit log]
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Scott Branden <scott.branden@broadcom.com>
  13. 01 Apr, 2019 (1 commit)
    • PCI: iproc: Fix a leaked reference by adding missing of_node_put() · 8956388d
      Wen Yang authored
      The call to of_parse_phandle() returns a node pointer with its refcount
      incremented, so it must be explicitly decremented after the last use.
      
      iproc_msi_init() also calls of_node_get() to increase refcount:
      
      iproc_msi_init()
       -> iproc_msi_alloc_domains()
        -> pci_msi_create_irq_domain()
         -> msi_create_irq_domain()
          -> irq_domain_create_linear()
           -> __irq_domain_add()
      
      so the irq_domain will not be affected when the node reference is released.
      
      Detected by coccinelle with the following warnings:
        ./drivers/pci/controller/pcie-iproc.c:1323:3-9: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 1299, but without a corresponding object release within this function.
        ./drivers/pci/controller/pcie-iproc.c:1330:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 1299, but without a corresponding object release within this function.
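      
      A minimal sketch of the pattern being fixed (the function name is
      illustrative; "msi-parent" is the phandle parsed by the driver): every
      node returned by of_parse_phandle() carries an extra reference that must
      be dropped with of_node_put() on all exit paths.
      
        #include <linux/errno.h>
        #include <linux/of.h>
        
        static int iproc_pcie_msi_enable_sketch(struct device_node *node)
        {
            struct device_node *msi_node;
            int ret = 0;
        
            msi_node = of_parse_phandle(node, "msi-parent", 0);
            if (!msi_node)
                return -ENODEV;
        
            /* ... iproc_msi_init() takes its own reference via of_node_get(),
             * so the MSI irq_domain keeps working after we drop ours ... */
        
            of_node_put(msi_node);      /* the put that was missing on these paths */
            return ret;
        }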
      Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Ray Jui <rjui@broadcom.com>
      Cc: Scott Branden <sbranden@broadcom.com>
      Cc: bcm-kernel-feedback-list@broadcom.com
      Cc: linux-pci@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
  14. 18 Sep, 2018 (1 commit)
  15. 13 Jul, 2018 (4 commits)
  16. 08 Jun, 2018 (1 commit)
  17. 21 Mar, 2018 (1 commit)
  18. 29 Jan, 2018 (1 commit)
  19. 12 Jan, 2018 (1 commit)
  20. 06 Oct, 2017 (1 commit)
  21. 06 Sep, 2017 (3 commits)
  22. 29 Aug, 2017 (2 commits)
    • PCI: iproc: Work around Stingray CRS defects · 39b7a4ff
      Oza Pawandeep authored
      Configuration Request Retry Status ("CRS") completions are a required part
      of PCIe.  A PCIe device may respond to a config request with a CRS
      completion to indicate that it needs more time to initialize.  A Root Port
      that receives a CRS completion may automatically retry the request, or it
      may treat the request as a failed transaction.  For a failed read, it will
      likely synthesize all 1's data, i.e., 0xffffffff, to complete the read to
      the CPU.
      
      CRS Software Visibility ("CRS SV") is an optional feature.  Per PCIe r3.1,
      sec 2.3.2, if supported and enabled, a Root Port that receives a CRS
      completion for a config read of the Vendor ID will synthesize 0x0001 data
      (an invalid Vendor ID) instead of retrying or failing the transaction.  The
      0x0001 data makes the CRS completion visible to software, so it can perform
      other tasks while waiting for the device.
      
      The iProc "Stingray" PCIe controller does not support CRS completions
      correctly.  From the Stingray PCIe Controller spec:
      
        4.7.3.3. Retry Status On Configuration Cycle
      
        Endpoints are allowed to generate retry status on configuration cycles.
        In this case, the RC needs to re-issue the request. The IP does not
        handle this because the number of configuration cycles needed will
        probably be less than the total number of non-posted operations needed.
      
        When a retry status is received on the User RX interface for a
        configuration request that was sent on the User TX interface, it will be
        indicated with a completion with the CMPL_STATUS field set to 2=CRS, and
        the user will have to find the address and data values and send a new
        transaction on the User TX interface.  When the internal configuration
        space returns a retry status during a configuration cycle (user_cscfg =
        1) on the Command/Status interface, the pcie_cscrs will assert with the
        pcie_csack signal to indicate the CRS status.
      
        When the CRS Software Visibility Enable register in the Root Control
        register is enabled, the IP will return the data value to 0x0001 for the
        Vendor ID value and 0xffff  (all 1’s) for the rest of the data in the
        request for reads of offset 0 that return with CRS status.  This is true
        for both the User RX Interface and for the Command/Status interface.
        When CRS Software Visibility is enabled, the CMPL_STATUS field of the
        completion on the User RX Interface will not be 2=CRS and the pcie_cscrs
        signal will not assert on the Command/Status interface.
      
      The Stingray hardware never reissues configuration requests when it
      receives CRS completions.  Contrary to what sec 4.7.3.3 above says, when it
      receives a CRS completion, it synthesizes 0xffff0001 data regardless of the
      address of the read or the value of the CRS SV enable bit.
      
      This is broken in two ways:
      
        1) When CRS SV is disabled, the Root Port should never synthesize the
        0x0001 value.  If it receives a CRS completion, it should fail the
        transaction and synthesize all 1's data.
      
        2) When CRS SV is enabled, the Root Port should only synthesize 0x0001
        data if it receives a CRS completion for a read of the Vendor ID.  If it
        receives a CRS completion for any other read, it should fail the
        transaction and synthesize all 1's data.
      
      This breaks pci_flr_wait(), which reads the Command register and expects to
      see all 1's data if the read fails because of CRS completions.  On
      Stingray, it sees the incorrect 0xffff0001 data instead.
      
      It also breaks config registers that contain the 0xffff0001 value.  If we
      read such a register, software can't distinguish a CRS completion from the
      actual value read from the device.
      
      On Stingray, if we read 0xffff0001 data, assume this indicates a CRS
      completion and retry the read for 500ms.  If we time out, return all 1's
      (0xffffffff) data.  Note that this corrupts registers that happen to
      contain 0xffff0001.
      
      Stingray advertises CRS SV support in its Root Capabilities register, and
      the CRS SV enable bit is writable (even though the hardware ignores it).
      Mask out PCI_EXP_RTCAP_CRSVIS so software doesn't try to use CRS SV.
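      
      A sketch of the workaround loop (the 500ms budget is described above;
      the polling interval and the helper name are illustrative):
      
        #include <linux/delay.h>
        #include <linux/io.h>
        
        #define CFG_RETRY_STATUS        0xffff0001
        #define CFG_RETRY_TIMEOUT_US    500000  /* give the device 500ms in total */
        #define CFG_RETRY_POLL_US       20      /* illustrative polling step */
        
        static u32 stingray_cfg_retry_read(void __iomem *cfg_addr)
        {
            unsigned int waited = 0;
            u32 data = readl(cfg_addr);
        
            while (data == CFG_RETRY_STATUS && waited < CFG_RETRY_TIMEOUT_US) {
                udelay(CFG_RETRY_POLL_US);
                waited += CFG_RETRY_POLL_US;
                data = readl(cfg_addr);
            }
        
            if (data == CFG_RETRY_STATUS)
                data = 0xffffffff;      /* timed out: fail the read with all 1's */
        
            return data;
        }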
      Signed-off-by: Oza Pawandeep <oza.oza@broadcom.com>
      [bhelgaas: changelog, add probe-time warning about corruption, don't
      advertise CRS SV support, remove duplicate pci_generic_config_read32(),
      fix alignment based on patch from Arnd Bergmann <arnd@arndb.de>]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    • PCI: iproc: Factor out memory-mapped config access address calculation · d005045b
      Oza Pawandeep authored
      Factor out the address calculation for memory-mapped config accesses as a
      separate function.  No functional change intended.
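      
      A sketch of the kind of helper being factored out; the bus/devfn/register
      bit positions below follow the conventional memory-mapped config layout
      and are not necessarily the exact iProc encoding.
      
        #include <linux/io.h>
        
        static void __iomem *map_cfg_bus_sketch(void __iomem *cfg_base, int busno,
                                                unsigned int devfn, int where)
        {
            unsigned int offset;
        
            /* bus in bits [27:20], device/function in [19:12], register in [11:2] */
            offset = (busno << 20) | (devfn << 12) | (where & ~0x3);
        
            return cfg_base + offset;
        }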
      Signed-off-by: Oza Pawandeep <oza.oza@broadcom.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  23. 03 Jul, 2017 (2 commits)
  24. 29 Jun, 2017 (1 commit)
    • PCI: iproc: Convert link check to raw PCI config accessors · 022adcfc
      Lorenzo Pieralisi authored
      The current iproc host bridge controller driver requires a struct
      pci_bus to be created in order to carry out PCI link checks with the
      standard PCI config space accessors.
      
      This struct pci_bus dependency is fictitious and burdens the driver with
      unneeded constraints (eg to use separate APIs to create and scan the root
      bus).
      
      Add PCI raw config space accessors to the iproc driver and remove the
      fictitious struct pci_bus dependency.
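      
      A sketch of a raw accessor of this kind (the name and parameters are
      illustrative): it operates on an already-mapped config address directly,
      so no struct pci_bus is needed just to poll the link.
      
        #include <linux/io.h>
        #include <linux/pci.h>
        
        static int raw_config_read32_sketch(void __iomem *addr, int where,
                                            int size, u32 *val)
        {
            *val = readl(addr);
        
            /* extract the requested byte/word from the 32-bit read */
            if (size <= 2)
                *val = (*val >> (8 * (where & 3))) & ((1 << (size * 8)) - 1);
        
            return PCIBIOS_SUCCESSFUL;
        }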
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Scott Branden <sbranden@broadcom.com>
      Cc: Ray Jui <rjui@broadcom.com>
      Cc: Jon Mason <jonmason@broadcom.com>
  25. 09 Feb, 2017 (1 commit)
  26. 09 Dec, 2016 (1 commit)
  27. 24 Nov, 2016 (2 commits)
  28. 18 Nov, 2016 (3 commits)