commit 299d14d4
Author: Linus Torvalds

Merge tag 'pci-v5.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Consolidate _HPP/_HPX stuff in pci-acpi.c and simplify it
     (Krzysztof Wilczynski)

   - Fix incorrect PCIe device types and remove dev->has_secondary_link
     to simplify code that deals with upstream/downstream ports (Mika
     Westerberg)

   - After suspend, restore Resizable BAR size bits correctly for 1MB
     BARs (Sumit Saxena)

   - Enable PCI_MSI_IRQ_DOMAIN support for RISC-V (Wesley Terpstra)

  Virtualization:

   - Add ACS quirks for iProc PAXB (Abhinav Ratna), Amazon Annapurna
     Labs (Ali Saidi)

   - Move sysfs SR-IOV functions to iov.c (Kelsey Skunberg)

   - Remove group write permissions from sysfs sriov_numvfs,
     sriov_drivers_autoprobe (Kelsey Skunberg)

  Hotplug:

   - Simplify pciehp indicator control (Denis Efremov)

  Peer-to-peer DMA:

   - Allow P2P DMA between root ports for whitelisted bridges (Logan
     Gunthorpe)

   - Whitelist some Intel host bridges for P2P DMA (Logan Gunthorpe)

   - DMA map P2P DMA requests that traverse host bridge (Logan
     Gunthorpe)

  Amazon Annapurna Labs host bridge driver:

   - Add DT binding and controller driver (Jonathan Chocron)

  Hyper-V host bridge driver:

   - Fix hv_pci_dev->pci_slot use-after-free (Dexuan Cui)

   - Fix PCI domain number collisions (Haiyang Zhang)

   - Use instance ID bytes 4 & 5 as PCI domain numbers (Haiyang Zhang)

   - Fix build errors on non-SYSFS config (Randy Dunlap)

  i.MX6 host bridge driver:

   - Limit DBI register length (Stefan Agner)

  Intel VMD host bridge driver:

   - Fix config addressing issues (Jon Derrick)

  Layerscape host bridge driver:

   - Add bar_fixed_64bit property to endpoint driver (Xiaowei Bao)

   - Add CONFIG_PCI_LAYERSCAPE_EP to build EP/RC drivers separately
     (Xiaowei Bao)

  Mediatek host bridge driver:

   - Add MT7629 controller support (Jianjun Wang)

  Mobiveil host bridge driver:

   - Fix CPU base address setup (Hou Zhiqiang)

   - Make "num-lanes" property optional (Hou Zhiqiang)

  Tegra host bridge driver:

   - Fix OF node reference leak (Nishka Dasgupta)

   - Disable MSI for root ports to work around design problem (Vidya
     Sagar)

   - Add Tegra194 DT binding and controller support (Vidya Sagar)

   - Add support for sideband pins and slot regulators (Vidya Sagar)

   - Add PIPE2UPHY support (Vidya Sagar)

  Misc:

   - Remove unused pci_block_cfg_access() et al (Kelsey Skunberg)

   - Unexport pci_bus_get(), etc (Kelsey Skunberg)

   - Hide PM, VC, link speed, ATS, ECRC, PTM constants and interfaces in
     the PCI core (Kelsey Skunberg)

   - Clean up sysfs DEVICE_ATTR() usage (Kelsey Skunberg)

   - Mark expected switch fall-through (Gustavo A. R. Silva)

   - Propagate errors for optional regulators and PHYs (Thierry Reding)

   - Fix kernel command line resource_alignment parameter issues (Logan
     Gunthorpe)"

* tag 'pci-v5.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (112 commits)
  PCI: Add pci_irq_vector() and other stubs when !CONFIG_PCI
  arm64: tegra: Add PCIe slot supply information in p2972-0000 platform
  arm64: tegra: Add configuration for PCIe C5 sideband signals
  PCI: tegra: Add support to enable slot regulators
  PCI: tegra: Add support to configure sideband pins
  PCI: vmd: Fix shadow offsets to reflect spec changes
  PCI: vmd: Fix config addressing when using bus offsets
  PCI: dwc: Add validation that PCIe core is set to correct mode
  PCI: dwc: al: Add Amazon Annapurna Labs PCIe controller driver
  dt-bindings: PCI: Add Amazon's Annapurna Labs PCIe host bridge binding
  PCI: Add quirk to disable MSI-X support for Amazon's Annapurna Labs Root Port
  PCI/VPD: Prevent VPD access for Amazon's Annapurna Labs Root Port
  PCI: Add ACS quirk for Amazon Annapurna Labs root ports
  PCI: Add Amazon's Annapurna Labs vendor ID
  MAINTAINERS: Add PCI native host/endpoint controllers designated reviewer
  PCI: hv: Use bytes 4 and 5 from instance ID as the PCI domain numbers
  dt-bindings: PCI: tegra: Add PCIe slot supplies regulator entries
  dt-bindings: PCI: tegra: Add sideband pins configuration entries
  PCI: tegra: Add Tegra194 PCIe support
  PCI: Get rid of dev->has_secondary_link flag
  ...
...@@ -3465,12 +3465,13 @@
specify the device is described above.
If <order of align> is not specified,
PAGE_SIZE is used as alignment.
A PCI-PCI bridge can be specified if resource
windows need to be expanded.
To specify the alignment for several
instances of a device, the PCI vendor,
device, subvendor, and subdevice may be
specified, e.g., 12@pci:8086:9c22:103c:198f
for 4096-byte alignment.
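(Illustrative usage, not part of the patch: booting with
"pci=resource_alignment=12@pci:8086:9c22:103c:198f" requests 2^12 = 4096-byte
alignment for every instance of that vendor/device/subvendor/subdevice ID.)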
ecrc= Enable/disable PCIe ECRC (transaction layer
end-to-end CRC checking).
bios: Use BIOS/firmware settings. This is the
......
...@@ -11,7 +11,6 @@ Required properties:
the ATU address space.
(The old way of getting the configuration address space from "ranges"
is deprecated and should be avoided.)
- num-lanes: number of lanes to use
RC mode:
- #address-cells: set to <3>
- #size-cells: set to <2>
...@@ -34,6 +33,11 @@ Optional properties:
- clock-names: Must include the following entries:
- "pcie"
- "pcie_bus"
- snps,enable-cdm-check: This is a boolean property and if present enables
automatic checking of CDM (Configuration Dependent Module) registers
for data corruption. CDM registers include standard PCIe configuration
space registers, Port Logic registers, DMA and iATU (internal Address
Translation Unit) registers.
RC mode:
- num-viewport: number of view ports configured in hardware. If a platform
does not specify it, the driver assumes 2.
......
...@@ -50,7 +50,7 @@ Additional required properties for imx7d-pcie and imx8mq-pcie:
- power-domains: Must be set to a phandle pointing to PCIE_PHY power domain
- resets: Must contain phandles to PCIe-related reset lines exposed by SRC
IP block
- reset-names: Must contain the following entries:
- "pciephy"
- "apps"
- "turnoff"
......
...@@ -6,6 +6,7 @@ Required properties:
"mediatek,mt2712-pcie"
"mediatek,mt7622-pcie"
"mediatek,mt7623-pcie"
"mediatek,mt7629-pcie"
- device_type: Must be "pci"
- reg: Base addresses and lengths of the PCIe subsys and root ports.
- reg-names: Names of the above areas to use during resource lookup.
......
NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
This PCIe host controller is based on the Synopsys DesignWare PCIe IP
and thus inherits all the common properties defined in designware-pcie.txt.
Required properties:
- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
- device_type: Must be "pci"
- power-domains: A phandle to the node that controls power to the respective
PCIe controller and a specifier name for the PCIe controller. Following are
the specifiers for the different PCIe controllers
TEGRA194_POWER_DOMAIN_PCIEX8B: C0
TEGRA194_POWER_DOMAIN_PCIEX1A: C1
TEGRA194_POWER_DOMAIN_PCIEX1A: C2
TEGRA194_POWER_DOMAIN_PCIEX1A: C3
TEGRA194_POWER_DOMAIN_PCIEX4A: C4
TEGRA194_POWER_DOMAIN_PCIEX8A: C5
these specifiers are defined in
"include/dt-bindings/power/tegra194-powergate.h" file.
- reg: A list of physical base address and length pairs for each set of
controller registers. Must contain an entry for each entry in the reg-names
property.
- reg-names: Must include the following entries:
"appl": Controller's application logic registers
"config": As per the definition in designware-pcie.txt
"atu_dma": iATU and DMA registers. This is where the iATU (internal Address
Translation Unit) registers of the PCIe core are made available
for SW access.
"dbi": The aperture where root port's own configuration registers are
available
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
"intr": The Tegra interrupt that is asserted for controller interrupts
"msi": The Tegra interrupt that is asserted when an MSI is received
- bus-range: Range of bus numbers associated with this controller
- #address-cells: Address representation for root ports (must be 3)
- cell 0 specifies the bus and device numbers of the root port:
[23:16]: bus number
[15:11]: device number
- cell 1 denotes the upper 32 address bits and should be 0
- cell 2 contains the lower 32 address bits and is used to translate to the
CPU address space
- #size-cells: Size representation for root ports (must be 2)
- ranges: Describes the translation of addresses for root ports and standard
PCI regions. The entries must be 7 cells each, where the first three cells
correspond to the address as described for the #address-cells property
above, the fourth and fifth cells are for the physical CPU address to
translate to and the sixth and seventh cells are as described for the
#size-cells property above.
- Entries setup the mapping for the standard I/O, memory and
prefetchable PCI regions. The first cell determines the type of region
that is setup:
- 0x81000000: I/O memory region
- 0x82000000: non-prefetchable memory region
- 0xc2000000: prefetchable memory region
Please refer to the standard PCI bus binding document for a more detailed
explanation.
- #interrupt-cells: Size representation for interrupts (must be 1)
- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
Please refer to the standard PCI bus binding document for a more detailed
explanation.
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- core
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- apb
- core
- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- phy-names: Must include an entry for each active lane.
"p2u-N": where N ranges from 0 to one less than the total number of lanes
- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
by controller-id. Following are the controller ids for each controller.
0: C0
1: C1
2: C2
3: C3
4: C4
5: C5
- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
Optional properties:
- pinctrl-names: A list of pinctrl state names.
It is mandatory for C5 controller and optional for other controllers.
- "default": Configures PCIe I/O for proper operation.
- pinctrl-0: phandle for the 'default' state of pin configuration.
It is mandatory for C5 controller and optional for other controllers.
- supports-clkreq: Refer to Documentation/devicetree/bindings/pci/pci.txt
- nvidia,update-fc-fixup: This is a boolean property and needs to be present to
improve performance when a platform is designed in such a way that it
satisfies at least one of the following conditions thereby enabling root
port to exchange optimum number of FC (Flow Control) credits with
downstream devices
1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
a) speed is Gen-2 and MPS is 256B
b) speed is >= Gen-3 with any MPS
- nvidia,aspm-cmrt-us: Common Mode Restore Time for proper operation of ASPM
to be specified in microseconds
- nvidia,aspm-pwr-on-t-us: Power On time for proper operation of ASPM to be
specified in microseconds
- nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be
specified in microseconds
- vpcie3v3-supply: A phandle to the regulator node that supplies 3.3V to the slot
if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
in p2972-0000 platform).
- vpcie12v-supply: A phandle to the regulator node that supplies 12V to the slot
if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
in p2972-0000 platform).
Examples:
=========
Tegra194:
--------
pcie@14180000 {
compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
reg = <0x00 0x14180000 0x0 0x00020000 /* appl registers (128K) */
0x00 0x38000000 0x0 0x00040000 /* configuration space (256K) */
0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K) */
reg-names = "appl", "config", "atu_dma";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
num-lanes = <8>;
linux,pci-domain = <0>;
pinctrl-names = "default";
pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>;
clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
clock-names = "core";
resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
<&bpmp TEGRA194_RESET_PEX0_CORE_0>;
reset-names = "apb", "core";
interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
<GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
interrupt-names = "intr", "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
nvidia,bpmp = <&bpmp 0>;
supports-clkreq;
nvidia,aspm-cmrt-us = <60>;
nvidia,aspm-pwr-on-t-us = <20>;
nvidia,aspm-l0s-entrance-latency-us = <3>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000 /* downstream I/O (1MB) */
0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000 /* non-prefetchable memory (30MB) */
0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>; /* prefetchable memory (16GB) */
vddio-pex-ctl-supply = <&vdd_1v8ao>;
vpcie3v3-supply = <&vdd_3v3_pcie>;
vpcie12v-supply = <&vdd_12v_pcie>;
phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>,
<&p2u_hsio_5>;
phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3";
};
...@@ -11,7 +11,7 @@ Required properties:
- reg-names:
- "ctrl" for the control register region
- "config" for the config space region
- interrupts: Interrupt specifier for the PCIe controller
- clocks: reference to the PCIe controller clocks
- clock-names: mandatory if there is a second clock, in this case the
name must be "core" for the first clock and "reg" for the second
......
...@@ -27,6 +27,11 @@ driver implementation may support the following properties:
- reset-gpios:
If present this property specifies PERST# GPIO. Host drivers can parse the
GPIO and apply fundamental reset to endpoints.
- supports-clkreq:
If present this property specifies that CLKREQ signal routing exists from
root port to downstream device and host bridge drivers can do programming
which depends on CLKREQ signal existence. For example, programming root port
not to advertise ASPM L1 Sub-States support if there is no CLKREQ signal.
PCI-PCI Bridge properties
-------------------------
......
* Amazon Annapurna Labs PCIe host bridge
Amazon's Annapurna Labs PCIe Host Controller is based on the Synopsys DesignWare
PCI core. It inherits common properties defined in
Documentation/devicetree/bindings/pci/designware-pcie.txt.
Properties of the host controller node that differ from it are:
- compatible:
Usage: required
Value type: <stringlist>
Definition: Value should contain
- "amazon,al-alpine-v2-pcie" for alpine_v2
- "amazon,al-alpine-v3-pcie" for alpine_v3
- reg:
Usage: required
Value type: <prop-encoded-array>
Definition: Register ranges as listed in the reg-names property
- reg-names:
Usage: required
Value type: <stringlist>
Definition: Must include the following entries
- "config" PCIe ECAM space
- "controller" AL proprietary registers
- "dbi" Designware PCIe registers
Example:
pcie-external0: pcie@fb600000 {
compatible = "amazon,al-alpine-v3-pcie";
reg = <0x0 0xfb600000 0x0 0x00100000
0x0 0xfd800000 0x0 0x00010000
0x0 0xfd810000 0x0 0x00001000>;
reg-names = "config", "controller", "dbi";
bus-range = <0 255>;
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
interrupt-map-mask = <0x00 0 0 7>;
interrupt-map = <0x0000 0 0 1 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>; /* INTa */
ranges = <0x02000000 0x0 0xc0010000 0x0 0xc0010000 0x0 0x07ff0000>;
};
NVIDIA Tegra194 P2U binding
Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High
Speed) each interfacing with 12 and 8 P2U instances respectively.
A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE
interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
lane.
Required properties:
- compatible: For Tegra19x, must contain "nvidia,tegra194-p2u".
- reg: Should be the physical address space and length of each respective P2U
instance.
- reg-names: Must include the entry "ctl".
Required properties for PHY port node:
- #phy-cells: Defined by generic PHY bindings. Must be 0.
Refer to phy/phy-bindings.txt for the generic PHY binding properties.
Example:
p2u_hsio_0: phy@3e10000 {
compatible = "nvidia,tegra194-p2u";
reg = <0x03e10000 0x10000>;
reg-names = "ctl";
#phy-cells = <0>;
};
...@@ -12580,16 +12580,18 @@ F: arch/x86/kernel/early-quirks.c
PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS
M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
R: Andrew Murray <andrew.murray@arm.com>
L: linux-pci@vger.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-pci/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/
S: Supported
F: drivers/pci/controller/
PCIE DRIVER FOR AMAZON ANNAPURNA LABS
M: Jonathan Chocron <jonnyc@amazon.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/pcie-al.txt
F: drivers/pci/controller/dwc/pcie-al.c
PCIE DRIVER FOR AMLOGIC MESON
......
...@@ -874,7 +874,6 @@
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
num-lanes = <4>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -899,7 +898,6 @@
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
num-lanes = <4>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */
......
...@@ -486,7 +486,6 @@
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
num-lanes = <4>;
num-viewport = <2>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */
......
...@@ -677,7 +677,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <4>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -704,7 +703,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <2>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -731,7 +729,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <2>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x50 0x00010000 0x0 0x00010000 /* downstream I/O */
......
...@@ -649,7 +649,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <4>;
num-viewport = <8>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -671,7 +670,6 @@
reg-names = "regs", "addr_space";
num-ib-windows = <6>;
num-ob-windows = <8>;
num-lanes = <2>;
status = "disabled";
};
...@@ -687,7 +685,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <2>;
num-viewport = <8>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -709,7 +706,6 @@
reg-names = "regs", "addr_space";
num-ib-windows = <6>;
num-ob-windows = <8>;
num-lanes = <2>;
status = "disabled";
};
...@@ -725,7 +721,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <2>;
num-viewport = <8>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x50 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -747,7 +742,6 @@
reg-names = "regs", "addr_space";
num-ib-windows = <6>;
num-ob-windows = <8>;
num-lanes = <2>;
status = "disabled";
};
......
...@@ -469,7 +469,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <4>;
num-viewport = <256>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x20 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -495,7 +494,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <4>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x28 0x00010000 0x0 0x00010000 /* downstream I/O */
...@@ -521,7 +519,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <8>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x30 0x00010000 0x0 0x00010000 /* downstream I/O */
......
...@@ -639,7 +639,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <4>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
msi-parent = <&its>;
...@@ -661,7 +660,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <4>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
msi-parent = <&its>;
...@@ -683,7 +681,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <8>;
num-viewport = <256>;
bus-range = <0x0 0xff>;
msi-parent = <&its>;
...@@ -705,7 +702,6 @@
#size-cells = <2>;
device_type = "pci";
dma-coherent;
num-lanes = <4>;
num-viewport = <6>;
bus-range = <0x0 0xff>;
msi-parent = <&its>;
......
...@@ -289,5 +289,29 @@
gpio = <&gpio TEGRA194_MAIN_GPIO(A, 3) GPIO_ACTIVE_HIGH>;
enable-active-high;
};
vdd_3v3_pcie: regulator@2 {
compatible = "regulator-fixed";
reg = <2>;
regulator-name = "PEX_3V3";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
gpio = <&gpio TEGRA194_MAIN_GPIO(Z, 2) GPIO_ACTIVE_HIGH>;
regulator-boot-on;
enable-active-high;
};
vdd_12v_pcie: regulator@3 {
compatible = "regulator-fixed";
reg = <3>;
regulator-name = "VDD_12V";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
gpio = <&gpio TEGRA194_MAIN_GPIO(A, 1) GPIO_ACTIVE_LOW>;
regulator-boot-on;
enable-active-low;
};
};
};
...@@ -93,9 +93,11 @@
};
pcie@141a0000 {
status = "okay";
vddio-pex-ctl-supply = <&vdd_1v8ao>;
vpcie3v3-supply = <&vdd_3v3_pcie>;
vpcie12v-supply = <&vdd_12v_pcie>;
phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
<&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
......
...@@ -3,8 +3,9 @@
#include <dt-bindings/gpio/tegra194-gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/mailbox/tegra186-hsp.h>
#include <dt-bindings/pinctrl/pinctrl-tegra.h>
#include <dt-bindings/power/tegra194-powergate.h>
#include <dt-bindings/reset/tegra194-reset.h>
#include <dt-bindings/thermal/tegra194-bpmp-thermal.h>
/ {
...@@ -130,6 +131,38 @@
};
};
pinmux: pinmux@2430000 {
compatible = "nvidia,tegra194-pinmux";
reg = <0x2430000 0x17000
0xc300000 0x4000>;
status = "okay";
pex_rst_c5_out_state: pex_rst_c5_out {
pex_rst {
nvidia,pins = "pex_l5_rst_n_pgg1";
nvidia,schmitt = <TEGRA_PIN_DISABLE>;
nvidia,lpdr = <TEGRA_PIN_ENABLE>;
nvidia,enable-input = <TEGRA_PIN_DISABLE>;
nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
nvidia,tristate = <TEGRA_PIN_DISABLE>;
nvidia,pull = <TEGRA_PIN_PULL_NONE>;
};
};
clkreq_c5_bi_dir_state: clkreq_c5_bi_dir {
clkreq {
nvidia,pins = "pex_l5_clkreq_n_pgg0";
nvidia,schmitt = <TEGRA_PIN_DISABLE>;
nvidia,lpdr = <TEGRA_PIN_ENABLE>;
nvidia,enable-input = <TEGRA_PIN_ENABLE>;
nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
nvidia,tristate = <TEGRA_PIN_DISABLE>;
nvidia,pull = <TEGRA_PIN_PULL_NONE>;
};
};
};
uarta: serial@3100000 {
compatible = "nvidia,tegra194-uart", "nvidia,tegra20-uart";
reg = <0x03100000 0x40>;
...@@ -1365,6 +1398,9 @@
num-viewport = <8>;
linux,pci-domain = <5>;
pinctrl-names = "default";
pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>;
clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>,
<&bpmp TEGRA194_CLK_PEX1_CORE_5M>;
clock-names = "core", "core_m";
......
...@@ -66,8 +66,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file,
unsigned long size,
pgprot_t prot);
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
/* This part of code was originally in xilinx-pci.h */
#ifdef CONFIG_PCI_XILINX
extern void __init xilinx_pci_init(void);
......
...@@ -108,7 +108,6 @@ extern unsigned long PCIBIOS_MIN_MEM;
#define HAVE_PCI_MMAP
#define ARCH_GENERIC_PCI_MMAP_RESOURCE
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
/*
* Dynamic DMA mapping stuff.
......
...@@ -112,8 +112,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file,
unsigned long size,
pgprot_t prot);
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
extern resource_size_t pcibios_io_space_offset(struct pci_controller *hose);
extern void pcibios_setup_bus_devices(struct pci_bus *bus);
extern void pcibios_setup_bus_self(struct pci_bus *bus);
......
...@@ -38,8 +38,6 @@ static inline int pci_proc_domain(struct pci_bus *bus)
#define arch_can_pci_mmap_io() 1
#define HAVE_ARCH_PCI_GET_UNMAPPED_AREA
#define get_pci_unmapped_area get_fb_unmapped_area
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
#endif /* CONFIG_SPARC64 */
#if defined(CONFIG_SPARC64) || defined(CONFIG_LEON_PCI)
......
...@@ -15,7 +15,6 @@
#include <linux/pm_runtime.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-aspm.h>
#include <linux/dmar.h>
#include <linux/acpi.h>
#include <linux/slab.h>
......
...@@ -9,7 +9,6 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci-aspm.h>
#include <linux/slab.h>
#include "xillybus.h"
......
...@@ -583,8 +583,10 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
break;
}
if (is_pci_p2pdma_page(sg_page(sg)))
pci_p2pdma_unmap_sg(qp->pd->device->dma_device, sg,
sg_cnt, dir);
else
ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
}
EXPORT_SYMBOL(rdma_rw_ctx_destroy);
......
...@@ -13,7 +13,6 @@
#include <linux/io.h>
#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/pci-aspm.h>
#include <linux/crc32.h>
#include <linux/if_vlan.h>
#include <linux/timecounter.h>
......
...@@ -14,7 +14,6 @@
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-aspm.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/ethtool.h>
......
...@@ -28,7 +28,6 @@
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
#include <linux/prefetch.h>
#include <linux/pci-aspm.h>
#include <linux/ipv6.h>
#include <net/ip6_checksum.h>
......
...@@ -18,7 +18,6 @@
#include <linux/nl80211.h>
#include <linux/pci.h>
#include <linux/pci-aspm.h>
#include <linux/etherdevice.h>
#include <linux/module.h>
#include "../ath.h"
......
...@@ -18,7 +18,6 @@
#include <linux/module.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/pci-aspm.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/delay.h>
......
...@@ -18,7 +18,6 @@
#include <linux/module.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/pci-aspm.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/delay.h>
......
...@@ -62,7 +62,6 @@
*
*****************************************************************************/
#include <linux/pci.h>
#include <linux/pci-aspm.h>
#include <linux/interrupt.h>
#include <linux/debugfs.h>
#include <linux/sched.h>
......
...@@ -549,8 +549,10 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
WARN_ON_ONCE(!iod->nents);
if (is_pci_p2pdma_page(sg_page(iod->sg)))
pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
rq_dma_dir(req));
else
dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
...@@ -834,8 +836,8 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
goto out;
if (is_pci_p2pdma_page(sg_page(iod->sg)))
nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg,
iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN);
else
nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
rq_dma_dir(req), DMA_ATTR_NO_WARN);
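This hunk and the rdma/rw.c hunk above converge on a symmetric pattern: the
same is_pci_p2pdma_page() test that selects the mapping routine also selects
the unmapping routine. Condensed to its core (an illustrative sketch, not code
taken from the series; dev, sgl, nents and dir are placeholder names):

	if (is_pci_p2pdma_page(sg_page(sgl)))
		count = pci_p2pdma_map_sg_attrs(dev, sgl, nents, dir, DMA_ATTR_NO_WARN);
	else
		count = dma_map_sg_attrs(dev, sgl, nents, dir, DMA_ATTR_NO_WARN);

	/* ... and on teardown, mirror whichever branch did the mapping ... */
	if (is_pci_p2pdma_page(sg_page(sgl)))
		pci_p2pdma_unmap_sg(dev, sgl, nents, dir);
	else
		dma_unmap_sg(dev, sgl, nents, dir);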
......
...@@ -52,7 +52,7 @@ config PCI_MSI
If you don't know what to do here, say Y.
config PCI_MSI_IRQ_DOMAIN
def_bool ARC || ARM || ARM64 || X86 || RISCV
depends on PCI_MSI
select GENERIC_MSI_IRQ_DOMAIN
...@@ -170,7 +170,7 @@ config PCI_P2PDMA
Many PCIe root complexes do not support P2P transactions and
it's hard to tell which support it at all, so at this time,
P2P DMA transactions must be between devices behind the same root
port.
If unsure, say N.
...@@ -181,7 +181,7 @@ config PCI_LABEL
config PCI_HYPERV
tristate "Hyper-V PCI Frontend"
depends on X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
select PCI_HYPERV_INTERFACE
help
The PCI device frontend driver allows the kernel to import arbitrary
......
...@@ -336,15 +336,6 @@ static inline int pcie_cap_version(const struct pci_dev *dev)
return pcie_caps_reg(dev) & PCI_EXP_FLAGS_VERS;
}
static bool pcie_downstream_port(const struct pci_dev *dev)
{
int type = pci_pcie_type(dev);
return type == PCI_EXP_TYPE_ROOT_PORT ||
type == PCI_EXP_TYPE_DOWNSTREAM ||
type == PCI_EXP_TYPE_PCIE_BRIDGE;
}
bool pcie_cap_has_lnkctl(const struct pci_dev *dev)
{
int type = pci_pcie_type(dev);
......
...@@ -417,11 +417,9 @@ struct pci_bus *pci_bus_get(struct pci_bus *bus)
get_device(&bus->dev);
return bus;
}
EXPORT_SYMBOL(pci_bus_get);
void pci_bus_put(struct pci_bus *bus)
{
if (bus)
put_device(&bus->dev);
}
EXPORT_SYMBOL(pci_bus_put);
...@@ -131,13 +131,29 @@ config PCI_KEYSTONE_EP
DesignWare core functions to implement the driver.
config PCI_LAYERSCAPE
bool "Freescale Layerscape PCIe controller - Host mode"
depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select MFD_SYSCON
select PCIE_DW_HOST
help
Say Y here if you want to enable PCIe controller support on Layerscape
SoCs to work in Host mode.
This controller can work either as EP or RC. The RCW[HOST_AGT_PEX]
determines which PCIe controller works in EP mode and which PCIe
controller works in RC mode.
config PCI_LAYERSCAPE_EP
bool "Freescale Layerscape PCIe controller - Endpoint mode"
depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
depends on PCI_ENDPOINT
select PCIE_DW_EP
help
Say Y here if you want to enable PCIe controller support on Layerscape
SoCs to work in Endpoint mode.
This controller can work either as EP or RC. The RCW[HOST_AGT_PEX]
determines which PCIe controller works in EP mode and which PCIe
controller works in RC mode.
config PCI_HISI
depends on OF && (ARM64 || COMPILE_TEST)
...@@ -220,6 +236,16 @@ config PCI_MESON
and therefore the driver re-uses the DesignWare core functions to
implement the driver.
config PCIE_TEGRA194
tristate "NVIDIA Tegra194 (and later) PCIe controller"
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
select PHY_TEGRA194_P2U
help
Say Y here if you want support for DesignWare core based PCIe host
controller found in NVIDIA Tegra194 SoC.
config PCIE_UNIPHIER
bool "Socionext UniPhier PCIe controllers"
depends on ARCH_UNIPHIER || COMPILE_TEST
...@@ -230,4 +256,16 @@ config PCIE_UNIPHIER
Say Y here if you want PCIe controller support on UniPhier SoCs.
This driver supports LD20 and PXs3 SoCs.
config PCIE_AL
bool "Amazon Annapurna Labs PCIe controller"
depends on OF && (ARM64 || COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
help
Say Y here to enable support for Amazon's Annapurna Labs PCIe
controller IP on Amazon SoCs. The PCIe controller uses the DesignWare
core plus Annapurna Labs proprietary hardware wrappers. This is
required only for DT-based platforms. ACPI platforms with the
Annapurna Labs PCIe controller don't need to enable this.
endmenu
...@@ -8,13 +8,15 @@ obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o
obj-$(CONFIG_PCI_MESON) += pci-meson.o
obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
# The following drivers are for devices that use the generic ACPI
......
...@@ -465,7 +465,7 @@ static int __init exynos_pcie_probe(struct platform_device *pdev)
ep->phy = devm_of_phy_get(dev, np, NULL);
if (IS_ERR(ep->phy)) {
if (PTR_ERR(ep->phy) != -ENODEV)
return PTR_ERR(ep->phy);
ep->phy = NULL;
......
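The exynos hunk above and the imx6 and armada8k hunks further down all apply
the same "propagate errors for optional regulators and PHYs" rule from the
changelog: -ENODEV now means the optional resource simply is not described,
while every other error, including -EPROBE_DEFER, is returned to the caller.
Boiled down (an illustrative sketch; variable names are placeholders):

	phy = devm_of_phy_get(dev, np, NULL);
	if (IS_ERR(phy)) {
		if (PTR_ERR(phy) != -ENODEV)
			return PTR_ERR(phy);	/* real failure, incl. -EPROBE_DEFER */
		phy = NULL;			/* optional PHY not present, carry on */
	}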
...@@ -57,6 +57,7 @@ enum imx6_pcie_variants {
struct imx6_pcie_drvdata {
enum imx6_pcie_variants variant;
u32 flags;
int dbi_length;
};
struct imx6_pcie {
...@@ -1173,8 +1174,8 @@ static int imx6_pcie_probe(struct platform_device *pdev)
imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie");
if (IS_ERR(imx6_pcie->vpcie)) {
if (PTR_ERR(imx6_pcie->vpcie) != -ENODEV)
return PTR_ERR(imx6_pcie->vpcie);
imx6_pcie->vpcie = NULL;
}
...@@ -1212,6 +1213,7 @@ static const struct imx6_pcie_drvdata drvdata[] = {
.variant = IMX6Q,
.flags = IMX6_PCIE_FLAG_IMX6_PHY |
IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE,
.dbi_length = 0x200,
},
[IMX6SX] = {
.variant = IMX6SX,
...@@ -1254,6 +1256,37 @@ static struct platform_driver imx6_pcie_driver = {
.shutdown = imx6_pcie_shutdown,
};
static void imx6_pcie_quirk(struct pci_dev *dev)
{
struct pci_bus *bus = dev->bus;
struct pcie_port *pp = bus->sysdata;
/* Bus parent is the PCI bridge, its parent is this platform driver */
if (!bus->dev.parent || !bus->dev.parent->parent)
return;
/* Make sure we only quirk devices associated with this driver */
if (bus->dev.parent->parent->driver != &imx6_pcie_driver.driver)
return;
if (bus->number == pp->root_bus_nr) {
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci);
/*
* Limit config length to avoid the kernel reading beyond
* the register set and causing an abort on i.MX 6Quad
*/
if (imx6_pcie->drvdata->dbi_length) {
dev->cfg_size = imx6_pcie->drvdata->dbi_length;
dev_info(&dev->dev, "Limiting cfg_size to %d\n",
dev->cfg_size);
}
}
}
DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_SYNOPSYS, 0xabcd,
PCI_CLASS_BRIDGE_PCI, 8, imx6_pcie_quirk);
static int __init imx6_pcie_init(void)
{
#ifdef CONFIG_ARM
......
...@@ -44,6 +44,7 @@ static const struct pci_epc_features ls_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,
.bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4),
};
static const struct pci_epc_features*
......
...@@ -91,3 +91,368 @@ struct pci_ecam_ops al_pcie_ops = {
};
#endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */
#ifdef CONFIG_PCIE_AL
#include <linux/of_pci.h>
#include "pcie-designware.h"
#define AL_PCIE_REV_ID_2 2
#define AL_PCIE_REV_ID_3 3
#define AL_PCIE_REV_ID_4 4
#define AXI_BASE_OFFSET 0x0
#define DEVICE_ID_OFFSET 0x16c
#define DEVICE_REV_ID 0x0
#define DEVICE_REV_ID_DEV_ID_MASK GENMASK(31, 16)
#define DEVICE_REV_ID_DEV_ID_X4 0
#define DEVICE_REV_ID_DEV_ID_X8 2
#define DEVICE_REV_ID_DEV_ID_X16 4
#define OB_CTRL_REV1_2_OFFSET 0x0040
#define OB_CTRL_REV3_5_OFFSET 0x0030
#define CFG_TARGET_BUS 0x0
#define CFG_TARGET_BUS_MASK_MASK GENMASK(7, 0)
#define CFG_TARGET_BUS_BUSNUM_MASK GENMASK(15, 8)
#define CFG_CONTROL 0x4
#define CFG_CONTROL_SUBBUS_MASK GENMASK(15, 8)
#define CFG_CONTROL_SEC_BUS_MASK GENMASK(23, 16)
struct al_pcie_reg_offsets {
unsigned int ob_ctrl;
};
struct al_pcie_target_bus_cfg {
u8 reg_val;
u8 reg_mask;
u8 ecam_mask;
};
struct al_pcie {
struct dw_pcie *pci;
void __iomem *controller_base; /* base of PCIe unit (not DW core) */
struct device *dev;
resource_size_t ecam_size;
unsigned int controller_rev_id;
struct al_pcie_reg_offsets reg_offsets;
struct al_pcie_target_bus_cfg target_bus_cfg;
};
#define PCIE_ECAM_DEVFN(x) (((x) & 0xff) << 12)
#define to_al_pcie(x) dev_get_drvdata((x)->dev)
static inline u32 al_pcie_controller_readl(struct al_pcie *pcie, u32 offset)
{
return readl_relaxed(pcie->controller_base + offset);
}
static inline void al_pcie_controller_writel(struct al_pcie *pcie, u32 offset,
u32 val)
{
writel_relaxed(val, pcie->controller_base + offset);
}
static int al_pcie_rev_id_get(struct al_pcie *pcie, unsigned int *rev_id)
{
u32 dev_rev_id_val;
u32 dev_id_val;
dev_rev_id_val = al_pcie_controller_readl(pcie, AXI_BASE_OFFSET +
DEVICE_ID_OFFSET +
DEVICE_REV_ID);
dev_id_val = FIELD_GET(DEVICE_REV_ID_DEV_ID_MASK, dev_rev_id_val);
switch (dev_id_val) {
case DEVICE_REV_ID_DEV_ID_X4:
*rev_id = AL_PCIE_REV_ID_2;
break;
case DEVICE_REV_ID_DEV_ID_X8:
*rev_id = AL_PCIE_REV_ID_3;
break;
case DEVICE_REV_ID_DEV_ID_X16:
*rev_id = AL_PCIE_REV_ID_4;
break;
default:
dev_err(pcie->dev, "Unsupported dev_id_val (0x%x)\n",
dev_id_val);
return -EINVAL;
}
dev_dbg(pcie->dev, "dev_id_val: 0x%x\n", dev_id_val);
return 0;
}
static int al_pcie_reg_offsets_set(struct al_pcie *pcie)
{
switch (pcie->controller_rev_id) {
case AL_PCIE_REV_ID_2:
pcie->reg_offsets.ob_ctrl = OB_CTRL_REV1_2_OFFSET;
break;
case AL_PCIE_REV_ID_3:
case AL_PCIE_REV_ID_4:
pcie->reg_offsets.ob_ctrl = OB_CTRL_REV3_5_OFFSET;
break;
default:
dev_err(pcie->dev, "Unsupported controller rev_id: 0x%x\n",
pcie->controller_rev_id);
return -EINVAL;
}
return 0;
}
static inline void al_pcie_target_bus_set(struct al_pcie *pcie,
u8 target_bus,
u8 mask_target_bus)
{
u32 reg;
reg = FIELD_PREP(CFG_TARGET_BUS_MASK_MASK, mask_target_bus) |
FIELD_PREP(CFG_TARGET_BUS_BUSNUM_MASK, target_bus);
al_pcie_controller_writel(pcie, AXI_BASE_OFFSET +
pcie->reg_offsets.ob_ctrl + CFG_TARGET_BUS,
reg);
}
static void __iomem *al_pcie_conf_addr_map(struct al_pcie *pcie,
unsigned int busnr,
unsigned int devfn)
{
struct al_pcie_target_bus_cfg *target_bus_cfg = &pcie->target_bus_cfg;
unsigned int busnr_ecam = busnr & target_bus_cfg->ecam_mask;
unsigned int busnr_reg = busnr & target_bus_cfg->reg_mask;
struct pcie_port *pp = &pcie->pci->pp;
void __iomem *pci_base_addr;
pci_base_addr = (void __iomem *)((uintptr_t)pp->va_cfg0_base +
(busnr_ecam << 20) +
PCIE_ECAM_DEVFN(devfn));
if (busnr_reg != target_bus_cfg->reg_val) {
dev_dbg(pcie->pci->dev, "Changing target bus busnum val from 0x%x to 0x%x\n",
target_bus_cfg->reg_val, busnr_reg);
target_bus_cfg->reg_val = busnr_reg;
al_pcie_target_bus_set(pcie,
target_bus_cfg->reg_val,
target_bus_cfg->reg_mask);
}
return pci_base_addr;
}
static int al_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size,
u32 *val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct al_pcie *pcie = to_al_pcie(pci);
unsigned int busnr = bus->number;
void __iomem *pci_addr;
int rc;
pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn);
rc = dw_pcie_read(pci_addr + where, size, val);
dev_dbg(pci->dev, "%d-byte config read from %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n",
size, pci_domain_nr(bus), bus->number,
PCI_SLOT(devfn), PCI_FUNC(devfn), where,
(pci_addr + where), *val);
return rc;
}
static int al_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size,
u32 val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct al_pcie *pcie = to_al_pcie(pci);
unsigned int busnr = bus->number;
void __iomem *pci_addr;
int rc;
pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn);
rc = dw_pcie_write(pci_addr + where, size, val);
dev_dbg(pci->dev, "%d-byte config write to %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n",
size, pci_domain_nr(bus), bus->number,
PCI_SLOT(devfn), PCI_FUNC(devfn), where,
(pci_addr + where), val);
return rc;
}
static void al_pcie_config_prepare(struct al_pcie *pcie)
{
struct al_pcie_target_bus_cfg *target_bus_cfg;
struct pcie_port *pp = &pcie->pci->pp;
unsigned int ecam_bus_mask;
u32 cfg_control_offset;
u8 subordinate_bus;
u8 secondary_bus;
u32 cfg_control;
u32 reg;
target_bus_cfg = &pcie->target_bus_cfg;
ecam_bus_mask = (pcie->ecam_size >> 20) - 1;
if (ecam_bus_mask > 255) {
dev_warn(pcie->dev, "ECAM window size is larger than 256MB. Cutting off at 256\n");
ecam_bus_mask = 255;
}
/* This portion is taken from the transaction address */
target_bus_cfg->ecam_mask = ecam_bus_mask;
/* This portion is taken from the cfg_target_bus reg */
target_bus_cfg->reg_mask = ~target_bus_cfg->ecam_mask;
target_bus_cfg->reg_val = pp->busn->start & target_bus_cfg->reg_mask;
al_pcie_target_bus_set(pcie, target_bus_cfg->reg_val,
target_bus_cfg->reg_mask);
secondary_bus = pp->busn->start + 1;
subordinate_bus = pp->busn->end;
/* Set the valid values of secondary and subordinate buses */
cfg_control_offset = AXI_BASE_OFFSET + pcie->reg_offsets.ob_ctrl +
CFG_CONTROL;
cfg_control = al_pcie_controller_readl(pcie, cfg_control_offset);
reg = cfg_control &
~(CFG_CONTROL_SEC_BUS_MASK | CFG_CONTROL_SUBBUS_MASK);
reg |= FIELD_PREP(CFG_CONTROL_SUBBUS_MASK, subordinate_bus) |
FIELD_PREP(CFG_CONTROL_SEC_BUS_MASK, secondary_bus);
al_pcie_controller_writel(pcie, cfg_control_offset, reg);
}
static int al_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct al_pcie *pcie = to_al_pcie(pci);
int rc;
rc = al_pcie_rev_id_get(pcie, &pcie->controller_rev_id);
if (rc)
return rc;
rc = al_pcie_reg_offsets_set(pcie);
if (rc)
return rc;
al_pcie_config_prepare(pcie);
return 0;
}
static const struct dw_pcie_host_ops al_pcie_host_ops = {
.rd_other_conf = al_pcie_rd_other_conf,
.wr_other_conf = al_pcie_wr_other_conf,
.host_init = al_pcie_host_init,
};
static int al_add_pcie_port(struct pcie_port *pp,
struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
int ret;
pp->ops = &al_pcie_host_ops;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
}
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
};
static int al_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct resource *controller_res;
struct resource *ecam_res;
struct resource *dbi_res;
struct al_pcie *al_pcie;
struct dw_pcie *pci;
al_pcie = devm_kzalloc(dev, sizeof(*al_pcie), GFP_KERNEL);
if (!al_pcie)
return -ENOMEM;
pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
if (!pci)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
al_pcie->pci = pci;
al_pcie->dev = dev;
dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
pci->dbi_base = devm_pci_remap_cfg_resource(dev, dbi_res);
if (IS_ERR(pci->dbi_base)) {
dev_err(dev, "couldn't remap dbi base %pR\n", dbi_res);
return PTR_ERR(pci->dbi_base);
}
ecam_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
if (!ecam_res) {
dev_err(dev, "couldn't find 'config' reg in DT\n");
return -ENOENT;
}
al_pcie->ecam_size = resource_size(ecam_res);
controller_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"controller");
al_pcie->controller_base = devm_ioremap_resource(dev, controller_res);
if (IS_ERR(al_pcie->controller_base)) {
dev_err(dev, "couldn't remap controller base %pR\n",
controller_res);
return PTR_ERR(al_pcie->controller_base);
}
dev_dbg(dev, "From DT: dbi_base: %pR, controller_base: %pR\n",
dbi_res, controller_res);
platform_set_drvdata(pdev, al_pcie);
return al_add_pcie_port(&pci->pp, pdev);
}
static const struct of_device_id al_pcie_of_match[] = {
{ .compatible = "amazon,al-alpine-v2-pcie",
},
{ .compatible = "amazon,al-alpine-v3-pcie",
},
{},
};
static struct platform_driver al_pcie_driver = {
.driver = {
.name = "al-pcie",
.of_match_table = al_pcie_of_match,
.suppress_bind_attrs = true,
},
.probe = al_pcie_probe,
};
builtin_platform_driver(al_pcie_driver);
#endif /* CONFIG_PCIE_AL*/
@@ -118,11 +118,10 @@ static int armada8k_pcie_setup_phys(struct armada8k_pcie *pcie)
 	for (i = 0; i < ARMADA8K_PCIE_MAX_LANES; i++) {
 		pcie->phy[i] = devm_of_phy_get_by_index(dev, node, i);
-		if (IS_ERR(pcie->phy[i]) &&
-		    (PTR_ERR(pcie->phy[i]) == -EPROBE_DEFER))
-			return PTR_ERR(pcie->phy[i]);
-
 		if (IS_ERR(pcie->phy[i])) {
+			if (PTR_ERR(pcie->phy[i]) != -ENODEV)
+				return PTR_ERR(pcie->phy[i]);
 			pcie->phy[i] = NULL;
 			continue;
 		}
......
@@ -40,39 +40,6 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 		__dw_pcie_ep_reset_bar(pci, bar, 0);
 }

-static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
-				     u8 cap)
-{
-	u8 cap_id, next_cap_ptr;
-	u16 reg;
-
-	if (!cap_ptr)
-		return 0;
-
-	reg = dw_pcie_readw_dbi(pci, cap_ptr);
-	cap_id = (reg & 0x00ff);
-
-	if (cap_id > PCI_CAP_ID_MAX)
-		return 0;
-
-	if (cap_id == cap)
-		return cap_ptr;
-
-	next_cap_ptr = (reg & 0xff00) >> 8;
-	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
-}
-
-static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
-{
-	u8 next_cap_ptr;
-	u16 reg;
-
-	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
-	next_cap_ptr = (reg & 0x00ff);
-
-	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
-}
-
 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
 				   struct pci_epf_header *hdr)
 {
@@ -531,6 +498,7 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 	int ret;
 	u32 reg;
 	void *addr;
+	u8 hdr_type;
 	unsigned int nbars;
 	unsigned int offset;
 	struct pci_epc *epc;
@@ -595,6 +563,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 	if (ep->ops->ep_init)
 		ep->ops->ep_init(ep);

+	hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
+	if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
+		dev_err(pci->dev, "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
+			hdr_type);
+		return -EIO;
+	}
+
 	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
 	if (ret < 0)
 		epc->max_functions = 1;
@@ -612,9 +587,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
 		return -ENOMEM;
 	}
-	ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
+	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);

-	ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
+	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);

 	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
 	if (offset) {
......
@@ -323,6 +323,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
 	struct pci_bus *child;
 	struct pci_host_bridge *bridge;
 	struct resource *cfg_res;
+	u32 hdr_type;
 	int ret;

 	raw_spin_lock_init(&pci->pp.lock);
@@ -464,6 +465,21 @@ int dw_pcie_host_init(struct pcie_port *pp)
 			goto err_free_msi;
 	}

+	ret = dw_pcie_rd_own_conf(pp, PCI_HEADER_TYPE, 1, &hdr_type);
+	if (ret != PCIBIOS_SUCCESSFUL) {
+		dev_err(pci->dev, "Failed reading PCI_HEADER_TYPE cfg space reg (ret: 0x%x)\n",
+			ret);
+		ret = pcibios_err_to_errno(ret);
+		goto err_free_msi;
+	}
+	if (hdr_type != PCI_HEADER_TYPE_BRIDGE) {
+		dev_err(pci->dev,
+			"PCIe controller is not set to bridge type (hdr_type: 0x%x)!\n",
+			hdr_type);
+		ret = -EIO;
+		goto err_free_msi;
+	}
+
 	pp->root_bus_nr = pp->busn->start;
 	bridge->dev.parent = dev;
@@ -628,6 +644,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 	u32 val, ctrl, num_ctrls;
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);

+	/*
+	 * Enable DBI read-only registers for writing/updating configuration.
+	 * Write permission gets disabled towards the end of this function.
+	 */
+	dw_pcie_dbi_ro_wr_en(pci);
+
 	dw_pcie_setup(pci);

 	if (!pp->ops->msi_host_init) {
@@ -650,12 +672,10 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);

 	/* Setup interrupt pins */
-	dw_pcie_dbi_ro_wr_en(pci);
 	val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
 	val &= 0xffff00ff;
 	val |= 0x00000100;
 	dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
-	dw_pcie_dbi_ro_wr_dis(pci);

 	/* Setup bus numbers */
 	val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
@@ -687,15 +707,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
 	dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0);

-	/* Enable write permission for the DBI read-only register */
-	dw_pcie_dbi_ro_wr_en(pci);
 	/* Program correct class for RC */
 	dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
-	/* Better disable write permission right after the update */
-	dw_pcie_dbi_ro_wr_dis(pci);

 	dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val);
 	val |= PORT_LOGIC_SPEED_CHANGE;
 	dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val);
+
+	dw_pcie_dbi_ro_wr_dis(pci);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
@@ -14,6 +14,86 @@
 #include "pcie-designware.h"
/*
* These interfaces resemble the pci_find_*capability() interfaces, but these
* are for configuring host controllers, which are bridges *to* PCI devices but
* are not PCI devices themselves.
*/
static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
u8 cap)
{
u8 cap_id, next_cap_ptr;
u16 reg;
if (!cap_ptr)
return 0;
reg = dw_pcie_readw_dbi(pci, cap_ptr);
cap_id = (reg & 0x00ff);
if (cap_id > PCI_CAP_ID_MAX)
return 0;
if (cap_id == cap)
return cap_ptr;
next_cap_ptr = (reg & 0xff00) >> 8;
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
}
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
{
u8 next_cap_ptr;
u16 reg;
reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
next_cap_ptr = (reg & 0x00ff);
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
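
As a usage sketch (mirroring how the endpoint core uses this helper later in the series; "pci" and "ep" stand for the driver's struct dw_pcie and struct dw_pcie_ep), a DWC-based driver caches standard capability offsets from its own DBI config space once at init time:

	/* sketch only: cache MSI/MSI-X capability offsets at init */
	ep->msi_cap  = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);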
static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
u8 cap)
{
u32 header;
int ttl;
int pos = PCI_CFG_SPACE_SIZE;
/* minimum 8 bytes per capability */
ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
if (start)
pos = start;
header = dw_pcie_readl_dbi(pci, pos);
/*
* If we have no capabilities, this is indicated by cap ID,
* cap version and next pointer all being 0.
*/
if (header == 0)
return 0;
while (ttl-- > 0) {
if (PCI_EXT_CAP_ID(header) == cap && pos != start)
return pos;
pos = PCI_EXT_CAP_NEXT(header);
if (pos < PCI_CFG_SPACE_SIZE)
break;
header = dw_pcie_readl_dbi(pci, pos);
}
return 0;
}
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
{
return dw_pcie_find_next_ext_capability(pci, 0, cap);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
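
Extended capabilities in the controller's own config space can be located the same way; a sketch matching the Resizable BAR lookup done in the endpoint init path (register layout per the standard PCI_REBAR_* definitions):

	/* sketch only: count Resizable BAR entries if the capability exists */
	u16 offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);

	if (offset)
		nbars = (dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL) &
			 PCI_REBAR_CTRL_NBAR_MASK) >> PCI_REBAR_CTRL_NBAR_SHIFT;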
int dw_pcie_read(void __iomem *addr, int size, u32 *val)
{
	if (!IS_ALIGNED((uintptr_t)addr, size)) {
@@ -376,10 +456,11 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
 		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
 	}

-	dev_err(pci->dev, "Phy link never came up\n");
+	dev_info(pci->dev, "Phy link never came up\n");

 	return -ETIMEDOUT;
 }
+EXPORT_SYMBOL_GPL(dw_pcie_wait_for_link);

 int dw_pcie_link_up(struct dw_pcie *pci)
 {
@@ -423,8 +504,10 @@ void dw_pcie_setup(struct dw_pcie *pci)
 	ret = of_property_read_u32(np, "num-lanes", &lanes);
-	if (ret)
-		lanes = 0;
+	if (ret) {
+		dev_dbg(pci->dev, "property num-lanes isn't found\n");
+		return;
+	}

 	/* Set the number of lanes */
 	val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
@@ -466,4 +549,11 @@ void dw_pcie_setup(struct dw_pcie *pci)
 		break;
 	}
 	dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+
+	if (of_property_read_bool(np, "snps,enable-cdm-check")) {
+		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+		val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
+		       PCIE_PL_CHK_REG_CHK_REG_START;
+		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
+	}
 }
@@ -86,6 +86,15 @@
 #define PCIE_MISC_CONTROL_1_OFF		0x8BC
 #define PCIE_DBI_RO_WR_EN		BIT(0)

+#define PCIE_PL_CHK_REG_CONTROL_STATUS			0xB20
+#define PCIE_PL_CHK_REG_CHK_REG_START			BIT(0)
+#define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS		BIT(1)
+#define PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR	BIT(16)
+#define PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR		BIT(17)
+#define PCIE_PL_CHK_REG_CHK_REG_COMPLETE		BIT(18)
+
+#define PCIE_PL_CHK_REG_ERR_ADDR			0xB28
+
 /*
  * iATU Unroll-specific register definitions
  * From 4.80 core version the address translation will be made by unroll
@@ -251,6 +260,9 @@ struct dw_pcie {
 #define to_dw_pcie_from_ep(endpoint)   \
 		container_of((endpoint), struct dw_pcie, ep)

+u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
+u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
+
 int dw_pcie_read(void __iomem *addr, int size, u32 *val);
 int dw_pcie_write(void __iomem *addr, int size, u32 val);
......
@@ -340,8 +340,8 @@ static int histb_pcie_probe(struct platform_device *pdev)
 	hipcie->vpcie = devm_regulator_get_optional(dev, "vpcie");
 	if (IS_ERR(hipcie->vpcie)) {
-		if (PTR_ERR(hipcie->vpcie) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
+		if (PTR_ERR(hipcie->vpcie) != -ENODEV)
+			return PTR_ERR(hipcie->vpcie);
 		hipcie->vpcie = NULL;
 	}
......
@@ -436,7 +436,7 @@ static int kirin_pcie_host_init(struct pcie_port *pp)
 	return 0;
 }

-static struct dw_pcie_ops kirin_dw_pcie_ops = {
+static const struct dw_pcie_ops kirin_dw_pcie_ops = {
 	.read_dbi = kirin_pcie_read_dbi,
 	.write_dbi = kirin_pcie_write_dbi,
 	.link_up = kirin_pcie_link_up,
......
(This file's diff has been collapsed and is not shown.)
@@ -43,9 +43,8 @@ static struct pci_config_window *gen_pci_init(struct device *dev,
 		goto err_out;
 	}

-	err = devm_add_action(dev, gen_pci_unmap_cfg, cfg);
+	err = devm_add_action_or_reset(dev, gen_pci_unmap_cfg, cfg);
 	if (err) {
-		gen_pci_unmap_cfg(cfg);
 		goto err_out;
 	}

 	return cfg;
......
@@ -2809,6 +2809,48 @@ static void put_hvpcibus(struct hv_pcibus_device *hbus)
	complete(&hbus->remove_event);
}
#define HVPCI_DOM_MAP_SIZE (64 * 1024)
static DECLARE_BITMAP(hvpci_dom_map, HVPCI_DOM_MAP_SIZE);
/*
* PCI domain number 0 is used by emulated devices on Gen1 VMs, so define 0
* as invalid for passthrough PCI devices of this driver.
*/
#define HVPCI_DOM_INVALID 0
/**
* hv_get_dom_num() - Get a valid PCI domain number
* Check if the PCI domain number is in use, and return another number if
* it is in use.
*
* @dom: Requested domain number
*
* return: domain number on success, HVPCI_DOM_INVALID on failure
*/
static u16 hv_get_dom_num(u16 dom)
{
unsigned int i;
if (test_and_set_bit(dom, hvpci_dom_map) == 0)
return dom;
for_each_clear_bit(i, hvpci_dom_map, HVPCI_DOM_MAP_SIZE) {
if (test_and_set_bit(i, hvpci_dom_map) == 0)
return i;
}
return HVPCI_DOM_INVALID;
}
/**
* hv_put_dom_num() - Mark the PCI domain number as free
* @dom: Domain number to be freed
*/
static void hv_put_dom_num(u16 dom)
{
clear_bit(dom, hvpci_dom_map);
}
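
A condensed sketch of how the two helpers pair up in this driver (the full flow appears in hv_pci_probe() and hv_pci_remove() below); the sketch omits the surrounding error handling:

	/* sketch only: allocate a domain in probe, release it on teardown */
	u16 dom_req, dom;

	dom_req = hdev->dev_instance.b[5] << 8 | hdev->dev_instance.b[4];
	dom = hv_get_dom_num(dom_req);		/* may differ from dom_req on collision */
	if (dom == HVPCI_DOM_INVALID)
		return -EINVAL;			/* all domain numbers already in use */

	/* ... later, on error paths and in hv_pci_remove(): */
	hv_put_dom_num(dom);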
/**
 * hv_pci_probe() - New VMBus channel probe, for a root PCI bus
 * @hdev: VMBus's tracking struct for this root PCI bus
@@ -2820,6 +2862,7 @@ static int hv_pci_probe(struct hv_device *hdev,
 			const struct hv_vmbus_device_id *dev_id)
 {
 	struct hv_pcibus_device *hbus;
+	u16 dom_req, dom;
 	char *name;
 	int ret;
@@ -2835,19 +2878,34 @@ static int hv_pci_probe(struct hv_device *hdev,
 	hbus->state = hv_pcibus_init;

 	/*
-	 * The PCI bus "domain" is what is called "segment" in ACPI and
-	 * other specs. Pull it from the instance ID, to get something
-	 * unique. Bytes 8 and 9 are what is used in Windows guests, so
-	 * do the same thing for consistency. Note that, since this code
-	 * only runs in a Hyper-V VM, Hyper-V can (and does) guarantee
-	 * that (1) the only domain in use for something that looks like
-	 * a physical PCI bus (which is actually emulated by the
-	 * hypervisor) is domain 0 and (2) there will be no overlap
-	 * between domains derived from these instance IDs in the same
-	 * VM.
+	 * The PCI bus "domain" is what is called "segment" in ACPI and other
+	 * specs. Pull it from the instance ID, to get something usually
+	 * unique. In rare cases of collision, we will find out another number
+	 * not in use.
+	 *
+	 * Note that, since this code only runs in a Hyper-V VM, Hyper-V
+	 * together with this guest driver can guarantee that (1) The only
+	 * domain used by Gen1 VMs for something that looks like a physical
+	 * PCI bus (which is actually emulated by the hypervisor) is domain 0.
+	 * (2) There will be no overlap between domains (after fixing possible
+	 * collisions) in the same VM.
 	 */
-	hbus->sysdata.domain = hdev->dev_instance.b[9] |
-			       hdev->dev_instance.b[8] << 8;
+	dom_req = hdev->dev_instance.b[5] << 8 | hdev->dev_instance.b[4];
+	dom = hv_get_dom_num(dom_req);
+
+	if (dom == HVPCI_DOM_INVALID) {
+		dev_err(&hdev->device,
+			"Unable to use dom# 0x%hx or other numbers", dom_req);
+		ret = -EINVAL;
+		goto free_bus;
+	}
+
+	if (dom != dom_req)
+		dev_info(&hdev->device,
+			 "PCI dom# 0x%hx has collision, using 0x%hx",
+			 dom_req, dom);
+
+	hbus->sysdata.domain = dom;

 	hbus->hdev = hdev;
 	refcount_set(&hbus->remove_lock, 1);
@@ -2862,7 +2920,7 @@ static int hv_pci_probe(struct hv_device *hdev,
 			   hbus->sysdata.domain);
 	if (!hbus->wq) {
 		ret = -ENOMEM;
-		goto free_bus;
+		goto free_dom;
 	}

 	ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0,
@@ -2946,6 +3004,8 @@ static int hv_pci_probe(struct hv_device *hdev,
 	vmbus_close(hdev->channel);
 destroy_wq:
 	destroy_workqueue(hbus->wq);
+free_dom:
+	hv_put_dom_num(hbus->sysdata.domain);
 free_bus:
 	free_page((unsigned long)hbus);
 	return ret;
@@ -3008,8 +3068,8 @@ static int hv_pci_remove(struct hv_device *hdev)
 		/* Remove the bus from PCI's point of view. */
 		pci_lock_rescan_remove();
 		pci_stop_root_bus(hbus->pci_bus);
-		pci_remove_root_bus(hbus->pci_bus);
 		hv_pci_remove_slots(hbus);
+		pci_remove_root_bus(hbus->pci_bus);
 		pci_unlock_rescan_remove();
 		hbus->state = hv_pcibus_removed;
 	}
@@ -3027,6 +3087,9 @@ static int hv_pci_remove(struct hv_device *hdev)
 	put_hvpcibus(hbus);
 	wait_for_completion(&hbus->remove_event);
 	destroy_workqueue(hbus->wq);
+
+	hv_put_dom_num(hbus->sysdata.domain);
+
 	free_page((unsigned long)hbus);
 	return 0;
 }
@@ -3058,6 +3121,9 @@ static void __exit exit_hv_pci_drv(void)
 static int __init init_hv_pci_drv(void)
 {
+	/* Set the invalid domain number's bit, so it will not be used */
+	set_bit(HVPCI_DOM_INVALID, hvpci_dom_map);
+
 	/* Initialize PCI block r/w interface */
 	hvpci_block_ops.read_block = hv_read_config_block;
 	hvpci_block_ops.write_block = hv_write_config_block;
......
...@@ -2237,14 +2237,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) ...@@ -2237,14 +2237,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
err = of_pci_get_devfn(port); err = of_pci_get_devfn(port);
if (err < 0) { if (err < 0) {
dev_err(dev, "failed to parse address: %d\n", err); dev_err(dev, "failed to parse address: %d\n", err);
return err; goto err_node_put;
} }
index = PCI_SLOT(err); index = PCI_SLOT(err);
if (index < 1 || index > soc->num_ports) { if (index < 1 || index > soc->num_ports) {
dev_err(dev, "invalid port number: %d\n", index); dev_err(dev, "invalid port number: %d\n", index);
return -EINVAL; err = -EINVAL;
goto err_node_put;
} }
index--; index--;
...@@ -2253,12 +2254,13 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) ...@@ -2253,12 +2254,13 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
if (err < 0) { if (err < 0) {
dev_err(dev, "failed to parse # of lanes: %d\n", dev_err(dev, "failed to parse # of lanes: %d\n",
err); err);
return err; goto err_node_put;
} }
if (value > 16) { if (value > 16) {
dev_err(dev, "invalid # of lanes: %u\n", value); dev_err(dev, "invalid # of lanes: %u\n", value);
return -EINVAL; err = -EINVAL;
goto err_node_put;
} }
lanes |= value << (index << 3); lanes |= value << (index << 3);
...@@ -2272,13 +2274,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) ...@@ -2272,13 +2274,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
lane += value; lane += value;
rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL); rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL);
if (!rp) if (!rp) {
return -ENOMEM; err = -ENOMEM;
goto err_node_put;
}
err = of_address_to_resource(port, 0, &rp->regs); err = of_address_to_resource(port, 0, &rp->regs);
if (err < 0) { if (err < 0) {
dev_err(dev, "failed to parse address: %d\n", err); dev_err(dev, "failed to parse address: %d\n", err);
return err; goto err_node_put;
} }
INIT_LIST_HEAD(&rp->list); INIT_LIST_HEAD(&rp->list);
...@@ -2330,6 +2334,10 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie) ...@@ -2330,6 +2334,10 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
return err; return err;
return 0; return 0;
err_node_put:
of_node_put(port);
return err;
} }
/* /*
......
@@ -93,12 +93,9 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
 	pcie->need_ib_cfg = of_property_read_bool(np, "dma-ranges");

 	/* PHY use is optional */
-	pcie->phy = devm_phy_get(dev, "pcie-phy");
-	if (IS_ERR(pcie->phy)) {
-		if (PTR_ERR(pcie->phy) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-		pcie->phy = NULL;
-	}
+	pcie->phy = devm_phy_optional_get(dev, "pcie-phy");
+	if (IS_ERR(pcie->phy))
+		return PTR_ERR(pcie->phy);

 	ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &resources,
 						    &iobase);
......
@@ -73,6 +73,7 @@
 #define PCIE_MSI_VECTOR		0x0c0

 #define PCIE_CONF_VEND_ID	0x100
+#define PCIE_CONF_DEVICE_ID	0x102
 #define PCIE_CONF_CLASS_ID	0x106

 #define PCIE_INT_MASK		0x420
@@ -141,12 +142,16 @@ struct mtk_pcie_port;
 /**
  * struct mtk_pcie_soc - differentiate between host generations
  * @need_fix_class_id: whether this host's class ID needed to be fixed or not
+ * @need_fix_device_id: whether this host's device ID needed to be fixed or not
+ * @device_id: device ID which this host need to be fixed
  * @ops: pointer to configuration access functions
  * @startup: pointer to controller setting functions
  * @setup_irq: pointer to initialize IRQ functions
  */
 struct mtk_pcie_soc {
 	bool need_fix_class_id;
+	bool need_fix_device_id;
+	unsigned int device_id;
 	struct pci_ops *ops;
 	int (*startup)(struct mtk_pcie_port *port);
 	int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node);
@@ -630,8 +635,6 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc)
 	}

 	chained_irq_exit(irqchip, desc);
-
-	return;
 }

 static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
@@ -696,6 +699,9 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
 		writew(val, port->base + PCIE_CONF_CLASS_ID);
 	}

+	if (soc->need_fix_device_id)
+		writew(soc->device_id, port->base + PCIE_CONF_DEVICE_ID);
+
 	/* 100ms timeout value should be enough for Gen1/2 training */
 	err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
 				 !!(val & PCIE_PORT_LINKUP_V2), 20,
@@ -1216,11 +1222,21 @@ static const struct mtk_pcie_soc mtk_pcie_soc_mt7622 = {
 	.setup_irq = mtk_pcie_setup_irq,
 };

+static const struct mtk_pcie_soc mtk_pcie_soc_mt7629 = {
+	.need_fix_class_id = true,
+	.need_fix_device_id = true,
+	.device_id = PCI_DEVICE_ID_MEDIATEK_7629,
+	.ops = &mtk_pcie_ops_v2,
+	.startup = mtk_pcie_startup_port_v2,
+	.setup_irq = mtk_pcie_setup_irq,
+};
+
 static const struct of_device_id mtk_pcie_ids[] = {
 	{ .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 },
 	{ .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 },
 	{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_mt2712 },
 	{ .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_mt7622 },
+	{ .compatible = "mediatek,mt7629-pcie", .data = &mtk_pcie_soc_mt7629 },
 	{},
 };
......
@@ -88,6 +88,7 @@
 #define  AMAP_CTRL_TYPE_MASK		3

 #define PAB_EXT_PEX_AMAP_SIZEN(win)	PAB_EXT_REG_ADDR(0xbef0, win)
+#define PAB_EXT_PEX_AMAP_AXI_WIN(win)	PAB_EXT_REG_ADDR(0xb4a0, win)
 #define PAB_PEX_AMAP_AXI_WIN(win)	PAB_REG_ADDR(0x4ba4, win)
 #define PAB_PEX_AMAP_PEX_WIN_L(win)	PAB_REG_ADDR(0x4ba8, win)
 #define PAB_PEX_AMAP_PEX_WIN_H(win)	PAB_REG_ADDR(0x4bac, win)
@@ -462,7 +463,7 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
 }

 static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
-			       u64 pci_addr, u32 type, u64 size)
+			       u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
 {
 	u32 value;
 	u64 size64 = ~(size - 1);
@@ -482,7 +483,10 @@ static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
 	csr_writel(pcie, upper_32_bits(size64),
 		   PAB_EXT_PEX_AMAP_SIZEN(win_num));

-	csr_writel(pcie, pci_addr, PAB_PEX_AMAP_AXI_WIN(win_num));
+	csr_writel(pcie, lower_32_bits(cpu_addr),
+		   PAB_PEX_AMAP_AXI_WIN(win_num));
+	csr_writel(pcie, upper_32_bits(cpu_addr),
+		   PAB_EXT_PEX_AMAP_AXI_WIN(win_num));

 	csr_writel(pcie, lower_32_bits(pci_addr),
 		   PAB_PEX_AMAP_PEX_WIN_L(win_num));
@@ -624,7 +628,7 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie)
 			  CFG_WINDOW_TYPE, resource_size(pcie->ob_io_res));

 	/* memory inbound translation window */
-	program_ib_windows(pcie, WIN_NUM_0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);
+	program_ib_windows(pcie, WIN_NUM_0, 0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);

 	/* Get the I/O and memory ranges from DT */
 	resource_list_for_each_entry(win, &pcie->resources) {
......
@@ -608,29 +608,29 @@ static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip)

 	rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
 	if (IS_ERR(rockchip->vpcie12v)) {
-		if (PTR_ERR(rockchip->vpcie12v) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
+		if (PTR_ERR(rockchip->vpcie12v) != -ENODEV)
+			return PTR_ERR(rockchip->vpcie12v);
 		dev_info(dev, "no vpcie12v regulator found\n");
 	}

 	rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
 	if (IS_ERR(rockchip->vpcie3v3)) {
-		if (PTR_ERR(rockchip->vpcie3v3) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
+		if (PTR_ERR(rockchip->vpcie3v3) != -ENODEV)
+			return PTR_ERR(rockchip->vpcie3v3);
 		dev_info(dev, "no vpcie3v3 regulator found\n");
 	}

 	rockchip->vpcie1v8 = devm_regulator_get_optional(dev, "vpcie1v8");
 	if (IS_ERR(rockchip->vpcie1v8)) {
-		if (PTR_ERR(rockchip->vpcie1v8) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
+		if (PTR_ERR(rockchip->vpcie1v8) != -ENODEV)
+			return PTR_ERR(rockchip->vpcie1v8);
 		dev_info(dev, "no vpcie1v8 regulator found\n");
 	}

 	rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9");
 	if (IS_ERR(rockchip->vpcie0v9)) {
-		if (PTR_ERR(rockchip->vpcie0v9) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
+		if (PTR_ERR(rockchip->vpcie0v9) != -ENODEV)
+			return PTR_ERR(rockchip->vpcie0v9);
 		dev_info(dev, "no vpcie0v9 regulator found\n");
 	}
......
...@@ -31,6 +31,9 @@ ...@@ -31,6 +31,9 @@
#define PCI_REG_VMLOCK 0x70 #define PCI_REG_VMLOCK 0x70
#define MB2_SHADOW_EN(vmlock) (vmlock & 0x2) #define MB2_SHADOW_EN(vmlock) (vmlock & 0x2)
#define MB2_SHADOW_OFFSET 0x2000
#define MB2_SHADOW_SIZE 16
enum vmd_features { enum vmd_features {
/* /*
* Device may contain registers which hint the physical location of the * Device may contain registers which hint the physical location of the
...@@ -94,6 +97,7 @@ struct vmd_dev { ...@@ -94,6 +97,7 @@ struct vmd_dev {
struct resource resources[3]; struct resource resources[3];
struct irq_domain *irq_domain; struct irq_domain *irq_domain;
struct pci_bus *bus; struct pci_bus *bus;
u8 busn_start;
struct dma_map_ops dma_ops; struct dma_map_ops dma_ops;
struct dma_domain dma_domain; struct dma_domain dma_domain;
...@@ -440,7 +444,8 @@ static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus, ...@@ -440,7 +444,8 @@ static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len) unsigned int devfn, int reg, int len)
{ {
char __iomem *addr = vmd->cfgbar + char __iomem *addr = vmd->cfgbar +
(bus->number << 20) + (devfn << 12) + reg; ((bus->number - vmd->busn_start) << 20) +
(devfn << 12) + reg;
if ((addr - vmd->cfgbar) + len >= if ((addr - vmd->cfgbar) + len >=
resource_size(&vmd->dev->resource[VMD_CFGBAR])) resource_size(&vmd->dev->resource[VMD_CFGBAR]))
...@@ -563,7 +568,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) ...@@ -563,7 +568,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
unsigned long flags; unsigned long flags;
LIST_HEAD(resources); LIST_HEAD(resources);
resource_size_t offset[2] = {0}; resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000, busn_start = 0; resource_size_t membar2_offset = 0x2000;
struct pci_bus *child; struct pci_bus *child;
/* /*
...@@ -576,7 +581,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) ...@@ -576,7 +581,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
u32 vmlock; u32 vmlock;
int ret; int ret;
membar2_offset = 0x2018; membar2_offset = MB2_SHADOW_OFFSET + MB2_SHADOW_SIZE;
ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock); ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock);
if (ret || vmlock == ~0) if (ret || vmlock == ~0)
return -ENODEV; return -ENODEV;
...@@ -588,9 +593,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) ...@@ -588,9 +593,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
if (!membar2) if (!membar2)
return -ENOMEM; return -ENOMEM;
offset[0] = vmd->dev->resource[VMD_MEMBAR1].start - offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
readq(membar2 + 0x2008); readq(membar2 + MB2_SHADOW_OFFSET);
offset[1] = vmd->dev->resource[VMD_MEMBAR2].start - offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
readq(membar2 + 0x2010); readq(membar2 + MB2_SHADOW_OFFSET + 8);
pci_iounmap(vmd->dev, membar2); pci_iounmap(vmd->dev, membar2);
} }
} }
...@@ -606,14 +611,14 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) ...@@ -606,14 +611,14 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig); pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig);
if (BUS_RESTRICT_CAP(vmcap) && if (BUS_RESTRICT_CAP(vmcap) &&
(BUS_RESTRICT_CFG(vmconfig) == 0x1)) (BUS_RESTRICT_CFG(vmconfig) == 0x1))
busn_start = 128; vmd->busn_start = 128;
} }
res = &vmd->dev->resource[VMD_CFGBAR]; res = &vmd->dev->resource[VMD_CFGBAR];
vmd->resources[0] = (struct resource) { vmd->resources[0] = (struct resource) {
.name = "VMD CFGBAR", .name = "VMD CFGBAR",
.start = busn_start, .start = vmd->busn_start,
.end = busn_start + (resource_size(res) >> 20) - 1, .end = vmd->busn_start + (resource_size(res) >> 20) - 1,
.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
}; };
...@@ -681,8 +686,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) ...@@ -681,8 +686,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]); pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]); pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
vmd->bus = pci_create_root_bus(&vmd->dev->dev, busn_start, &vmd_ops, vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
sd, &resources); &vmd_ops, sd, &resources);
if (!vmd->bus) { if (!vmd->bus) {
pci_free_resource_list(&resources); pci_free_resource_list(&resources);
irq_domain_remove(vmd->irq_domain); irq_domain_remove(vmd->irq_domain);
......
@@ -563,7 +563,6 @@ cleanup_slots(void)
 	}
 cleanup_null:
 	up_write(&list_rwsem);
-	return;
 }

 int
......
@@ -173,7 +173,6 @@ static void pci_print_IRQ_route(void)
 		dbg("%d %d %d %d\n", tbus, tdevice >> 3, tdevice & 0x7, tslot);
 	}
-	return;
 }
......
@@ -1872,8 +1872,6 @@ static void interrupt_event_handler(struct controller *ctrl)
 			}
 		}		/* End of FOR loop */
 	}
-
-	return;
 }
@@ -1943,8 +1941,6 @@ void cpqhp_pushbutton_thread(struct timer_list *t)
 		p_slot->state = STATIC_STATE;
 	}
-
-	return;
 }
......
@@ -16,10 +16,7 @@
 #ifndef CONFIG_HOTPLUG_PCI_COMPAQ_NVRAM
-static inline void compaq_nvram_init(void __iomem *rom_start)
-{
-	return;
-}
+static inline void compaq_nvram_init(void __iomem *rom_start) { }

 static inline int compaq_nvram_load(void __iomem *rom_start, struct controller *ctrl)
 {
......
@@ -1941,6 +1941,7 @@ static int __init update_bridge_ranges(struct bus_node **bus)
 					break;
 				case PCI_HEADER_TYPE_BRIDGE:
 					function = 0x8;
+					/* fall through */
 				case PCI_HEADER_TYPE_MULTIBRIDGE:
 					/* We assume here that only 1 bus behind the bridge
 					   TO DO: add functionality for several:
......
@@ -110,9 +110,9 @@ struct controller {
  *
  * @OFF_STATE: slot is powered off, no subordinate devices are enumerated
  * @BLINKINGON_STATE: slot will be powered on after the 5 second delay,
- *	green led is blinking
+ *	Power Indicator is blinking
  * @BLINKINGOFF_STATE: slot will be powered off after the 5 second delay,
- *	green led is blinking
+ *	Power Indicator is blinking
  * @POWERON_STATE: slot is currently powering on
  * @POWEROFF_STATE: slot is currently powering off
  * @ON_STATE: slot is powered on, subordinate devices have been enumerated
@@ -167,12 +167,11 @@ int pciehp_power_on_slot(struct controller *ctrl);
 void pciehp_power_off_slot(struct controller *ctrl);
 void pciehp_get_power_status(struct controller *ctrl, u8 *status);

-void pciehp_set_attention_status(struct controller *ctrl, u8 status);
+#define INDICATOR_NOOP -1	/* Leave indicator unchanged */
+void pciehp_set_indicators(struct controller *ctrl, int pwr, int attn);
+
 void pciehp_get_latch_status(struct controller *ctrl, u8 *status);
 int pciehp_query_power_fault(struct controller *ctrl);
-void pciehp_green_led_on(struct controller *ctrl);
-void pciehp_green_led_off(struct controller *ctrl);
-void pciehp_green_led_blink(struct controller *ctrl);
 bool pciehp_card_present(struct controller *ctrl);
 bool pciehp_card_present_or_link_active(struct controller *ctrl);
 int pciehp_check_link_status(struct controller *ctrl);
......
@@ -95,15 +95,20 @@ static void cleanup_slot(struct controller *ctrl)
 }

 /*
- * set_attention_status - Turns the Amber LED for a slot on, off or blink
+ * set_attention_status - Turns the Attention Indicator on, off or blinking
  */
 static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
 {
 	struct controller *ctrl = to_ctrl(hotplug_slot);
 	struct pci_dev *pdev = ctrl->pcie->port;

+	if (status)
+		status <<= PCI_EXP_SLTCTL_ATTN_IND_SHIFT;
+	else
+		status = PCI_EXP_SLTCTL_ATTN_IND_OFF;
+
 	pci_config_pm_runtime_get(pdev);
-	pciehp_set_attention_status(ctrl, status);
+	pciehp_set_indicators(ctrl, INDICATOR_NOOP, status);
 	pci_config_pm_runtime_put(pdev);
 	return 0;
 }
......
@@ -30,7 +30,10 @@
 static void set_slot_off(struct controller *ctrl)
 {
-	/* turn off slot, turn on Amber LED, turn off Green LED if supported*/
+	/*
+	 * Turn off slot, turn on attention indicator, turn off power
+	 * indicator
+	 */
 	if (POWER_CTRL(ctrl)) {
 		pciehp_power_off_slot(ctrl);

@@ -42,8 +45,8 @@ static void set_slot_off(struct controller *ctrl)
 		msleep(1000);
 	}

-	pciehp_green_led_off(ctrl);
-	pciehp_set_attention_status(ctrl, 1);
+	pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+			      PCI_EXP_SLTCTL_ATTN_IND_ON);
 }
 /**
@@ -65,7 +68,8 @@ static int board_added(struct controller *ctrl)
 		return retval;
 	}

-	pciehp_green_led_blink(ctrl);
+	pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
+			      INDICATOR_NOOP);

 	/* Check link training status */
 	retval = pciehp_check_link_status(ctrl);
@@ -90,8 +94,8 @@ static int board_added(struct controller *ctrl)
 		}
 	}

-	pciehp_green_led_on(ctrl);
-	pciehp_set_attention_status(ctrl, 0);
+	pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
+			      PCI_EXP_SLTCTL_ATTN_IND_OFF);
 	return 0;

 err_exit:
@@ -100,7 +104,7 @@ static int board_added(struct controller *ctrl)
 }

 /**
- * remove_board - Turns off slot and LEDs
+ * remove_board - Turn off slot and Power Indicator
  * @ctrl: PCIe hotplug controller where board is being removed
  * @safe_removal: whether the board is safely removed (versus surprise removed)
  */
@@ -123,8 +127,8 @@ static void remove_board(struct controller *ctrl, bool safe_removal)
 			   &ctrl->pending_events);
 	}

-	/* turn off Green LED */
-	pciehp_green_led_off(ctrl);
+	pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+			      INDICATOR_NOOP);
 }
 static int pciehp_enable_slot(struct controller *ctrl);
@@ -171,9 +175,9 @@ void pciehp_handle_button_press(struct controller *ctrl)
 			ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n",
 				  slot_name(ctrl));
 		}
-		/* blink green LED and turn off amber */
-		pciehp_green_led_blink(ctrl);
-		pciehp_set_attention_status(ctrl, 0);
+		/* blink power indicator and turn off attention */
+		pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
+				      PCI_EXP_SLTCTL_ATTN_IND_OFF);
 		schedule_delayed_work(&ctrl->button_work, 5 * HZ);
 		break;
 	case BLINKINGOFF_STATE:
@@ -187,12 +191,13 @@ void pciehp_handle_button_press(struct controller *ctrl)
 		cancel_delayed_work(&ctrl->button_work);
 		if (ctrl->state == BLINKINGOFF_STATE) {
 			ctrl->state = ON_STATE;
-			pciehp_green_led_on(ctrl);
+			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
+					      PCI_EXP_SLTCTL_ATTN_IND_OFF);
 		} else {
 			ctrl->state = OFF_STATE;
-			pciehp_green_led_off(ctrl);
+			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+					      PCI_EXP_SLTCTL_ATTN_IND_OFF);
 		}
-		pciehp_set_attention_status(ctrl, 0);
 		ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n",
 			  slot_name(ctrl));
 		break;
@@ -310,7 +315,9 @@ static int pciehp_enable_slot(struct controller *ctrl)
 	pm_runtime_get_sync(&ctrl->pcie->port->dev);
 	ret = __pciehp_enable_slot(ctrl);
 	if (ret && ATTN_BUTTN(ctrl))
-		pciehp_green_led_off(ctrl);	/* may be blinking */
+		/* may be blinking */
+		pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+				      INDICATOR_NOOP);
 	pm_runtime_put(&ctrl->pcie->port->dev);

 	mutex_lock(&ctrl->state_lock);
......
@@ -418,65 +418,40 @@ int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 	return 0;
 }

-void pciehp_set_attention_status(struct controller *ctrl, u8 value)
-{
-	u16 slot_cmd;
-
-	if (!ATTN_LED(ctrl))
-		return;
-
-	switch (value) {
-	case 0:		/* turn off */
-		slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_OFF;
-		break;
-	case 1:		/* turn on */
-		slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_ON;
-		break;
-	case 2:		/* turn blink */
-		slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_BLINK;
-		break;
-	default:
-		return;
-	}
-	pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
-}
-
-void pciehp_green_led_on(struct controller *ctrl)
-{
-	if (!PWR_LED(ctrl))
-		return;
-
-	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
-			      PCI_EXP_SLTCTL_PIC);
-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
-		 PCI_EXP_SLTCTL_PWR_IND_ON);
-}
-
-void pciehp_green_led_off(struct controller *ctrl)
-{
-	if (!PWR_LED(ctrl))
-		return;
-
-	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
-			      PCI_EXP_SLTCTL_PIC);
-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
-		 PCI_EXP_SLTCTL_PWR_IND_OFF);
-}
-
-void pciehp_green_led_blink(struct controller *ctrl)
-{
-	if (!PWR_LED(ctrl))
-		return;
-
-	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
-			      PCI_EXP_SLTCTL_PIC);
-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
-		 PCI_EXP_SLTCTL_PWR_IND_BLINK);
-}
+/**
+ * pciehp_set_indicators() - set attention indicator, power indicator, or both
+ * @ctrl: PCIe hotplug controller
+ * @pwr: one of:
+ *	PCI_EXP_SLTCTL_PWR_IND_ON
+ *	PCI_EXP_SLTCTL_PWR_IND_BLINK
+ *	PCI_EXP_SLTCTL_PWR_IND_OFF
+ * @attn: one of:
+ *	PCI_EXP_SLTCTL_ATTN_IND_ON
+ *	PCI_EXP_SLTCTL_ATTN_IND_BLINK
+ *	PCI_EXP_SLTCTL_ATTN_IND_OFF
+ *
+ * Either @pwr or @attn can also be INDICATOR_NOOP to leave that indicator
+ * unchanged.
+ */
+void pciehp_set_indicators(struct controller *ctrl, int pwr, int attn)
+{
+	u16 cmd = 0, mask = 0;
+
+	if (PWR_LED(ctrl) && pwr != INDICATOR_NOOP) {
+		cmd |= (pwr & PCI_EXP_SLTCTL_PIC);
+		mask |= PCI_EXP_SLTCTL_PIC;
+	}
+
+	if (ATTN_LED(ctrl) && attn != INDICATOR_NOOP) {
+		cmd |= (attn & PCI_EXP_SLTCTL_AIC);
+		mask |= PCI_EXP_SLTCTL_AIC;
+	}
+
+	if (cmd) {
+		pcie_write_cmd_nowait(ctrl, cmd, mask);
+		ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+			 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
+	}
+}

 int pciehp_power_on_slot(struct controller *ctrl)
@@ -638,8 +613,8 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
 	if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
 		ctrl->power_fault_detected = 1;
 		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
-		pciehp_set_attention_status(ctrl, 1);
-		pciehp_green_led_off(ctrl);
+		pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+				      PCI_EXP_SLTCTL_ATTN_IND_ON);
 	}
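
With this consolidation, every former pciehp_green_led_*() / pciehp_set_attention_status() call site collapses into a single Slot Control write through the new helper. As a sketch (mirroring the board_added() hunk earlier), a slot power-up blinks the Power Indicator while leaving the Attention Indicator untouched:

	pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
			      INDICATOR_NOOP);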
/* /*
......
@@ -473,7 +473,6 @@ int __init rpadlpar_io_init(void)
 void rpadlpar_io_exit(void)
 {
 	dlpar_sysfs_exit();
-	return;
 }

 module_init(rpadlpar_io_init);
......
@@ -408,7 +408,6 @@ static void __exit cleanup_slots(void)
 		pci_hp_deregister(&slot->hotplug_slot);
 		dealloc_slot_struct(slot);
 	}
-	return;
 }

 static int __init rpaphp_init(void)
......
@@ -240,6 +240,173 @@ void pci_iov_remove_virtfn(struct pci_dev *dev, int id)
	pci_dev_put(dev);
}
static ssize_t sriov_totalvfs_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pci_sriov_get_totalvfs(pdev));
}
static ssize_t sriov_numvfs_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pdev->sriov->num_VFs);
}
/*
* num_vfs > 0; number of VFs to enable
* num_vfs = 0; disable all VFs
*
* Note: SRIOV spec does not allow partial VF
* disable, so it's all or none.
*/
static ssize_t sriov_numvfs_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
int ret;
u16 num_vfs;
ret = kstrtou16(buf, 0, &num_vfs);
if (ret < 0)
return ret;
if (num_vfs > pci_sriov_get_totalvfs(pdev))
return -ERANGE;
device_lock(&pdev->dev);
if (num_vfs == pdev->sriov->num_VFs)
goto exit;
/* is PF driver loaded w/callback */
if (!pdev->driver || !pdev->driver->sriov_configure) {
pci_info(pdev, "Driver does not support SRIOV configuration via sysfs\n");
ret = -ENOENT;
goto exit;
}
if (num_vfs == 0) {
/* disable VFs */
ret = pdev->driver->sriov_configure(pdev, 0);
goto exit;
}
/* enable VFs */
if (pdev->sriov->num_VFs) {
pci_warn(pdev, "%d VFs already enabled. Disable before enabling %d VFs\n",
pdev->sriov->num_VFs, num_vfs);
ret = -EBUSY;
goto exit;
}
ret = pdev->driver->sriov_configure(pdev, num_vfs);
if (ret < 0)
goto exit;
if (ret != num_vfs)
pci_warn(pdev, "%d VFs requested; only %d enabled\n",
num_vfs, ret);
exit:
device_unlock(&pdev->dev);
if (ret < 0)
return ret;
return count;
}
static ssize_t sriov_offset_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pdev->sriov->offset);
}
static ssize_t sriov_stride_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pdev->sriov->stride);
}
static ssize_t sriov_vf_device_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%x\n", pdev->sriov->vf_device);
}
static ssize_t sriov_drivers_autoprobe_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pdev->sriov->drivers_autoprobe);
}
static ssize_t sriov_drivers_autoprobe_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
bool drivers_autoprobe;
if (kstrtobool(buf, &drivers_autoprobe) < 0)
return -EINVAL;
pdev->sriov->drivers_autoprobe = drivers_autoprobe;
return count;
}
static DEVICE_ATTR_RO(sriov_totalvfs);
static DEVICE_ATTR_RW(sriov_numvfs);
static DEVICE_ATTR_RO(sriov_offset);
static DEVICE_ATTR_RO(sriov_stride);
static DEVICE_ATTR_RO(sriov_vf_device);
static DEVICE_ATTR_RW(sriov_drivers_autoprobe);
static struct attribute *sriov_dev_attrs[] = {
&dev_attr_sriov_totalvfs.attr,
&dev_attr_sriov_numvfs.attr,
&dev_attr_sriov_offset.attr,
&dev_attr_sriov_stride.attr,
&dev_attr_sriov_vf_device.attr,
&dev_attr_sriov_drivers_autoprobe.attr,
NULL,
};
static umode_t sriov_attrs_are_visible(struct kobject *kobj,
struct attribute *a, int n)
{
struct device *dev = kobj_to_dev(kobj);
if (!dev_is_pf(dev))
return 0;
return a->mode;
}
const struct attribute_group sriov_dev_attr_group = {
.attrs = sriov_dev_attrs,
.is_visible = sriov_attrs_are_visible,
};
int __weak pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs)
{
	return 0;
@@ -557,8 +724,8 @@ static void sriov_restore_state(struct pci_dev *dev)
 	ctrl |= iov->ctrl & PCI_SRIOV_CTRL_ARI;
 	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, ctrl);

-	for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++)
-		pci_update_resource(dev, i);
+	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++)
+		pci_update_resource(dev, i + PCI_IOV_RESOURCES);

 	pci_write_config_dword(dev, iov->pos + PCI_SRIOV_SYS_PGSIZE, iov->pgsz);
 	pci_iov_set_numvfs(dev, iov->num_VFs);
......
@@ -353,7 +353,7 @@ EXPORT_SYMBOL_GPL(devm_of_pci_get_host_bridge_resources);
 /**
  * of_irq_parse_pci - Resolve the interrupt for a PCI device
  * @pdev:       the device whose interrupt is to be resolved
- * @out_irq:    structure of_irq filled by this function
+ * @out_irq:    structure of_phandle_args filled by this function
  *
  * This function resolves the PCI interrupt for a given PCI device. If a
  * device-node exists for a given pci_dev, it will use normal OF tree
......
@@ -18,13 +18,32 @@
 #include <linux/percpu-refcount.h>
 #include <linux/random.h>
 #include <linux/seq_buf.h>
-#include <linux/iommu.h>
+#include <linux/xarray.h>
enum pci_p2pdma_map_type {
PCI_P2PDMA_MAP_UNKNOWN = 0,
PCI_P2PDMA_MAP_NOT_SUPPORTED,
PCI_P2PDMA_MAP_BUS_ADDR,
PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
};
struct pci_p2pdma {
	struct gen_pool *pool;
	bool p2pmem_published;
	struct xarray map_types;
};
struct pci_p2pdma_pagemap {
struct dev_pagemap pgmap;
struct pci_dev *provider;
u64 bus_offset;
};
static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap)
{
return container_of(pgmap, struct pci_p2pdma_pagemap, pgmap);
}
static ssize_t size_show(struct device *dev, struct device_attribute *attr, static ssize_t size_show(struct device *dev, struct device_attribute *attr,
char *buf) char *buf)
{ {
@@ -87,6 +106,7 @@ static void pci_p2pdma_release(void *data)
 	gen_pool_destroy(p2pdma->pool);
 	sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group);
+	xa_destroy(&p2pdma->map_types);
 }

 static int pci_p2pdma_setup(struct pci_dev *pdev)

@@ -98,6 +118,8 @@ static int pci_p2pdma_setup(struct pci_dev *pdev)
 	if (!p2p)
 		return -ENOMEM;

+	xa_init(&p2p->map_types);
+
 	p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev));
 	if (!p2p->pool)
 		goto out;
@@ -135,6 +157,7 @@ static int pci_p2pdma_setup(struct pci_dev *pdev)
 int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 			    u64 offset)
 {
+	struct pci_p2pdma_pagemap *p2p_pgmap;
 	struct dev_pagemap *pgmap;
 	void *addr;
 	int error;

@@ -157,14 +180,18 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 		return error;
 	}

-	pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL);
-	if (!pgmap)
+	p2p_pgmap = devm_kzalloc(&pdev->dev, sizeof(*p2p_pgmap), GFP_KERNEL);
+	if (!p2p_pgmap)
 		return -ENOMEM;

+	pgmap = &p2p_pgmap->pgmap;
 	pgmap->res.start = pci_resource_start(pdev, bar) + offset;
 	pgmap->res.end = pgmap->res.start + size - 1;
 	pgmap->res.flags = pci_resource_flags(pdev, bar);
 	pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
-	pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) -
-		pci_resource_start(pdev, bar);
+
+	p2p_pgmap->provider = pdev;
+	p2p_pgmap->bus_offset = pci_bus_address(pdev, bar) -
+		pci_resource_start(pdev, bar);

 	addr = devm_memremap_pages(&pdev->dev, pgmap);
@@ -246,19 +273,32 @@ static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev)
 	seq_buf_printf(buf, "%s;", pci_name(pdev));
 }

-/*
- * If we can't find a common upstream bridge take a look at the root
- * complex and compare it to a whitelist of known good hardware.
- */
-static bool root_complex_whitelist(struct pci_dev *dev)
+static const struct pci_p2pdma_whitelist_entry {
+	unsigned short vendor;
+	unsigned short device;
+	enum {
+		REQ_SAME_HOST_BRIDGE	= 1 << 0,
+	} flags;
+} pci_p2pdma_whitelist[] = {
+	/* AMD ZEN */
+	{PCI_VENDOR_ID_AMD,	0x1450,	0},
+
+	/* Intel Xeon E5/Core i7 */
+	{PCI_VENDOR_ID_INTEL,	0x3c00, REQ_SAME_HOST_BRIDGE},
+	{PCI_VENDOR_ID_INTEL,	0x3c01, REQ_SAME_HOST_BRIDGE},
+	/* Intel Xeon E7 v3/Xeon E5 v3/Core i7 */
+	{PCI_VENDOR_ID_INTEL,	0x2f00, REQ_SAME_HOST_BRIDGE},
+	{PCI_VENDOR_ID_INTEL,	0x2f01, REQ_SAME_HOST_BRIDGE},
+	{}
+};
+
+static bool __host_bridge_whitelist(struct pci_host_bridge *host,
+				    bool same_host_bridge)
 {
-	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
 	struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0));
+	const struct pci_p2pdma_whitelist_entry *entry;
 	unsigned short vendor, device;

-	if (iommu_present(dev->dev.bus))
-		return false;
-
 	if (!root)
 		return false;

@@ -266,65 +306,49 @@ static bool root_complex_whitelist(struct pci_dev *dev)
 	device = root->device;
 	pci_dev_put(root);

-	/* AMD ZEN host bridges can do peer to peer */
-	if (vendor == PCI_VENDOR_ID_AMD && device == 0x1450)
+	for (entry = pci_p2pdma_whitelist; entry->vendor; entry++) {
+		if (vendor != entry->vendor || device != entry->device)
+			continue;
+		if (entry->flags & REQ_SAME_HOST_BRIDGE && !same_host_bridge)
+			return false;
+
 		return true;
+	}

 	return false;
 }
-/*
- * Find the distance through the nearest common upstream bridge between
- * two PCI devices.
- *
- * If the two devices are the same device then 0 will be returned.
- *
- * If there are two virtual functions of the same device behind the same
- * bridge port then 2 will be returned (one step down to the PCIe switch,
- * then one step back to the same device).
- *
- * In the case where two devices are connected to the same PCIe switch, the
- * value 4 will be returned. This corresponds to the following PCI tree:
- *
- *     -+  Root Port
- *      \+ Switch Upstream Port
- *       +-+ Switch Downstream Port
- *       + \- Device A
- *       \-+ Switch Downstream Port
- *         \- Device B
- *
- * The distance is 4 because we traverse from Device A through the downstream
- * port of the switch, to the common upstream port, back up to the second
- * downstream port and then to Device B.
- *
- * Any two devices that don't have a common upstream bridge will return -1.
- * In this way devices on separate PCIe root ports will be rejected, which
- * is what we want for peer-to-peer seeing each PCIe root port defines a
- * separate hierarchy domain and there's no way to determine whether the root
- * complex supports forwarding between them.
- *
- * In the case where two devices are connected to different PCIe switches,
- * this function will still return a positive distance as long as both
- * switches eventually have a common upstream bridge. Note this covers
- * the case of using multiple PCIe switches to achieve a desired level of
- * fan-out from a root port. The exact distance will be a function of the
- * number of switches between Device A and Device B.
- *
- * If a bridge which has any ACS redirection bits set is in the path
- * then this functions will return -2. This is so we reject any
- * cases where the TLPs are forwarded up into the root complex.
- * In this case, a list of all infringing bridge addresses will be
- * populated in acs_list (assuming it's non-null) for printk purposes.
- */
-static int upstream_bridge_distance(struct pci_dev *provider,
-				    struct pci_dev *client,
-				    struct seq_buf *acs_list)
+/*
+ * If we can't find a common upstream bridge take a look at the root
+ * complex and compare it to a whitelist of known good hardware.
+ */
+static bool host_bridge_whitelist(struct pci_dev *a, struct pci_dev *b)
+{
+	struct pci_host_bridge *host_a = pci_find_host_bridge(a->bus);
+	struct pci_host_bridge *host_b = pci_find_host_bridge(b->bus);
+
+	if (host_a == host_b)
+		return __host_bridge_whitelist(host_a, true);
+
+	if (__host_bridge_whitelist(host_a, false) &&
+	    __host_bridge_whitelist(host_b, false))
+		return true;
+
+	return false;
+}
+
+static enum pci_p2pdma_map_type
+__upstream_bridge_distance(struct pci_dev *provider, struct pci_dev *client,
+		int *dist, bool *acs_redirects, struct seq_buf *acs_list)
 {
 	struct pci_dev *a = provider, *b = client, *bb;
 	int dist_a = 0;
 	int dist_b = 0;
 	int acs_cnt = 0;

+	if (acs_redirects)
+		*acs_redirects = false;
+
 	/*
 	 * Note, we don't need to take references to devices returned by
 	 * pci_upstream_bridge() seeing we hold a reference to a child
@@ -353,15 +377,10 @@ static int upstream_bridge_distance(struct pci_dev *provider,
 		dist_a++;
 	}

-	/*
-	 * Allow the connection if both devices are on a whitelisted root
-	 * complex, but add an arbitrary large value to the distance.
-	 */
-	if (root_complex_whitelist(provider) &&
-	    root_complex_whitelist(client))
-		return 0x1000 + dist_a + dist_b;
+	if (dist)
+		*dist = dist_a + dist_b;

-	return -1;
+	return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE;

 check_b_path_acs:
 	bb = b;
@@ -378,33 +397,110 @@ static int upstream_bridge_distance(struct pci_dev *provider,
 		bb = pci_upstream_bridge(bb);
 	}

-	if (acs_cnt)
-		return -2;
+	if (dist)
+		*dist = dist_a + dist_b;
+
+	if (acs_cnt) {
+		if (acs_redirects)
+			*acs_redirects = true;
+
+		return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE;
+	}
+
+	return PCI_P2PDMA_MAP_BUS_ADDR;
+}
+
+static unsigned long map_types_idx(struct pci_dev *client)
+{
+	return (pci_domain_nr(client->bus) << 16) |
+		(client->bus->number << 8) | client->devfn;
+}
+
+/*
+ * Find the distance through the nearest common upstream bridge between
+ * two PCI devices.
+ *
+ * If the two devices are the same device then 0 will be returned.
+ *
+ * If there are two virtual functions of the same device behind the same
+ * bridge port then 2 will be returned (one step down to the PCIe switch,
+ * then one step back to the same device).
+ *
+ * In the case where two devices are connected to the same PCIe switch, the
+ * value 4 will be returned. This corresponds to the following PCI tree:
+ *
+ *     -+  Root Port
+ *      \+ Switch Upstream Port
+ *       +-+ Switch Downstream Port
+ *       + \- Device A
+ *       \-+ Switch Downstream Port
+ *         \- Device B
+ *
+ * The distance is 4 because we traverse from Device A through the downstream
+ * port of the switch, to the common upstream port, back up to the second
+ * downstream port and then to Device B.
+ *
+ * Any two devices that cannot communicate using p2pdma will return
+ * PCI_P2PDMA_MAP_NOT_SUPPORTED.
+ *
+ * Any two devices that have a data path that goes through the host bridge
+ * will consult a whitelist. If the host bridges are on the whitelist,
+ * this function will return PCI_P2PDMA_MAP_THRU_HOST_BRIDGE.
+ *
+ * If either bridge is not on the whitelist this function returns
+ * PCI_P2PDMA_MAP_NOT_SUPPORTED.
+ *
+ * If a bridge which has any ACS redirection bits set is in the path,
+ * acs_redirects will be set to true. In this case, a list of all infringing
+ * bridge addresses will be populated in acs_list (assuming it's non-null)
+ * for printk purposes.
+ */
+static enum pci_p2pdma_map_type
+upstream_bridge_distance(struct pci_dev *provider, struct pci_dev *client,
+		int *dist, bool *acs_redirects, struct seq_buf *acs_list)
+{
+	enum pci_p2pdma_map_type map_type;
+
+	map_type = __upstream_bridge_distance(provider, client, dist,
+					      acs_redirects, acs_list);
+
+	if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE) {
+		if (!host_bridge_whitelist(provider, client))
+			map_type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
+	}
+
+	if (provider->p2pdma)
+		xa_store(&provider->p2pdma->map_types, map_types_idx(client),
+			 xa_mk_value(map_type), GFP_KERNEL);

-	return dist_a + dist_b;
+	return map_type;
 }
-static int upstream_bridge_distance_warn(struct pci_dev *provider,
-					 struct pci_dev *client)
+static enum pci_p2pdma_map_type
+upstream_bridge_distance_warn(struct pci_dev *provider, struct pci_dev *client,
+			      int *dist)
 {
 	struct seq_buf acs_list;
+	bool acs_redirects;
 	int ret;

 	seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE);
 	if (!acs_list.buffer)
 		return -ENOMEM;

-	ret = upstream_bridge_distance(provider, client, &acs_list);
-	if (ret == -2) {
-		pci_warn(client, "cannot be used for peer-to-peer DMA as ACS redirect is set between the client and provider (%s)\n",
+	ret = upstream_bridge_distance(provider, client, dist, &acs_redirects,
+				       &acs_list);
+	if (acs_redirects) {
+		pci_warn(client, "ACS redirect is set between the client and provider (%s)\n",
 			 pci_name(provider));
 		/* Drop final semicolon */
 		acs_list.buffer[acs_list.len-1] = 0;
 		pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n",
 			 acs_list.buffer);
+	}

-	} else if (ret < 0) {
-		pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge\n",
+	if (ret == PCI_P2PDMA_MAP_NOT_SUPPORTED) {
+		pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge or whitelisted host bridge\n",
 			 pci_name(provider));
 	}
@@ -421,22 +517,22 @@ static int upstream_bridge_distance_warn(struct pci_dev *provider,
  * @num_clients: number of clients in the array
  * @verbose: if true, print warnings for devices when we return -1
  *
- * Returns -1 if any of the clients are not compatible (behind the same
- * root port as the provider), otherwise returns a positive number where
- * a lower number is the preferable choice. (If there's one client
- * that's the same as the provider it will return 0, which is best choice).
+ * Returns -1 if any of the clients are not compatible, otherwise returns a
+ * positive number where a lower number is the preferable choice. (If there's
+ * one client that's the same as the provider it will return 0, which is best
+ * choice).
  *
- * For now, "compatible" means the provider and the clients are all behind
- * the same PCI root port. This cuts out cases that may work but is safest
- * for the user. Future work can expand this to white-list root complexes that
- * can safely forward between each ports.
+ * "compatible" means the provider and the clients are either all behind
+ * the same PCI root port or the host bridges connected to each of the devices
+ * are listed in the 'pci_p2pdma_whitelist'.
  */
 int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
 			     int num_clients, bool verbose)
 {
 	bool not_supported = false;
 	struct pci_dev *pci_client;
-	int distance = 0;
+	int total_dist = 0;
+	int distance;
 	int i, ret;

 	if (num_clients == 0)
@@ -461,26 +557,26 @@ int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
 		if (verbose)
 			ret = upstream_bridge_distance_warn(provider,
-					pci_client);
+					pci_client, &distance);
 		else
 			ret = upstream_bridge_distance(provider, pci_client,
-						       NULL);
+						       &distance, NULL, NULL);

 		pci_dev_put(pci_client);

-		if (ret < 0)
+		if (ret == PCI_P2PDMA_MAP_NOT_SUPPORTED)
 			not_supported = true;

 		if (not_supported && !verbose)
 			break;

-		distance += ret;
+		total_dist += distance;
 	}

 	if (not_supported)
 		return -1;

-	return distance;
+	return total_dist;
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_distance_many);
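A usage note on pci_p2pdma_distance_many(): callers are expected to check the distance before committing to a provider's p2pmem. A minimal, hypothetical sketch of an orchestrating driver (the function name and surrounding structure are invented for illustration; only the pci_p2pdma_*/pci_alloc_p2pmem calls are real APIs):

	#include <linux/pci-p2pdma.h>

	static void *setup_p2p_buffer(struct pci_dev *provider,
				      struct device **clients, int num_clients,
				      size_t len)
	{
		/* A negative distance means at least one client cannot reach the provider */
		if (pci_p2pdma_distance_many(provider, clients, num_clients, true) < 0)
			return NULL;

		/* Carve a buffer out of the provider's published p2pmem BAR */
		return pci_alloc_p2pmem(provider, len);
	}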
@@ -706,21 +802,19 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
 }
 EXPORT_SYMBOL_GPL(pci_p2pmem_publish);

-/**
- * pci_p2pdma_map_sg - map a PCI peer-to-peer scatterlist for DMA
- * @dev: device doing the DMA request
- * @sg: scatter list to map
- * @nents: elements in the scatterlist
- * @dir: DMA direction
- *
- * Scatterlists mapped with this function should not be unmapped in any way.
- *
- * Returns the number of SG entries mapped or 0 on error.
- */
-int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		      enum dma_data_direction dir)
+static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct pci_dev *provider,
+						    struct pci_dev *client)
+{
+	if (!provider->p2pdma)
+		return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+
+	return xa_to_value(xa_load(&provider->p2pdma->map_types,
+				   map_types_idx(client)));
+}
+
+static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap,
+		struct device *dev, struct scatterlist *sg, int nents)
 {
-	struct dev_pagemap *pgmap;
 	struct scatterlist *s;
 	phys_addr_t paddr;
 	int i;
@@ -736,16 +830,80 @@ int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 		return 0;

 	for_each_sg(sg, s, nents, i) {
-		pgmap = sg_page(s)->pgmap;
 		paddr = sg_phys(s);

-		s->dma_address = paddr - pgmap->pci_p2pdma_bus_offset;
+		s->dma_address = paddr - p2p_pgmap->bus_offset;
 		sg_dma_len(s) = s->length;
 	}

 	return nents;
 }
-EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg);
+
+/**
+ * pci_p2pdma_map_sg_attrs - map a PCI peer-to-peer scatterlist for DMA
+ * @dev: device doing the DMA request
+ * @sg: scatter list to map
+ * @nents: elements in the scatterlist
+ * @dir: DMA direction
+ * @attrs: DMA attributes passed to dma_map_sg() (if called)
+ *
+ * Scatterlists mapped with this function should be unmapped using
+ * pci_p2pdma_unmap_sg_attrs().
+ *
+ * Returns the number of SG entries mapped or 0 on error.
+ */
+int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+	struct pci_p2pdma_pagemap *p2p_pgmap =
+		to_p2p_pgmap(sg_page(sg)->pgmap);
+	struct pci_dev *client;
+
+	if (WARN_ON_ONCE(!dev_is_pci(dev)))
+		return 0;
+
+	client = to_pci_dev(dev);
+
+	switch (pci_p2pdma_map_type(p2p_pgmap->provider, client)) {
+	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+		return dma_map_sg_attrs(dev, sg, nents, dir, attrs);
+	case PCI_P2PDMA_MAP_BUS_ADDR:
+		return __pci_p2pdma_map_sg(p2p_pgmap, dev, sg, nents);
+	default:
+		WARN_ON_ONCE(1);
+		return 0;
+	}
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg_attrs);
+
+/**
+ * pci_p2pdma_unmap_sg - unmap a PCI peer-to-peer scatterlist that was
+ *	mapped with pci_p2pdma_map_sg()
+ * @dev: device doing the DMA request
+ * @sg: scatter list to map
+ * @nents: number of elements returned by pci_p2pdma_map_sg()
+ * @dir: DMA direction
+ * @attrs: DMA attributes passed to dma_unmap_sg() (if called)
+ */
+void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+	struct pci_p2pdma_pagemap *p2p_pgmap =
+		to_p2p_pgmap(sg_page(sg)->pgmap);
+	enum pci_p2pdma_map_type map_type;
+	struct pci_dev *client;
+
+	if (WARN_ON_ONCE(!dev_is_pci(dev)))
+		return;
+
+	client = to_pci_dev(dev);
+
+	map_type = pci_p2pdma_map_type(p2p_pgmap->provider, client);
+
+	if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE)
+		dma_unmap_sg_attrs(dev, sg, nents, dir, attrs);
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs);

 /**
  * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store
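A usage note on the new mapping helpers: scatterlists of p2pmem pages now go through pci_p2pdma_map_sg_attrs()/pci_p2pdma_unmap_sg_attrs(), and the host-bridge case simply falls through to the regular DMA API. A hedged sketch of the calling pattern (the driver function and variable names are invented; only the helper signatures match the code above):

	static int issue_p2p_dma(struct device *dma_dev, struct scatterlist *sgl,
				 int nents)
	{
		int mapped;

		mapped = pci_p2pdma_map_sg_attrs(dma_dev, sgl, nents,
						 DMA_BIDIRECTIONAL, 0);
		if (!mapped)
			return -EIO;

		/* ... program the hardware using sg_dma_address()/sg_dma_len() ... */

		pci_p2pdma_unmap_sg_attrs(dma_dev, sgl, mapped, DMA_BIDIRECTIONAL, 0);
		return 0;
	}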
@@ -38,7 +38,7 @@ struct pci_bridge_reg_behavior {
 	u32 rsvd;
 };

-const static struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
 	[PCI_VENDOR_ID / 4] = { .ro = ~0 },
 	[PCI_COMMAND / 4] = {
 		.rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |

@@ -173,7 +173,7 @@ const static struct pci_bridge_reg_behavior pci_regs_behavior[] = {
 	},
 };

-const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
 	[PCI_CAP_LIST_ID / 4] = {
 		/*
 		 * Capability ID, Next Capability Pointer and
@@ -890,8 +890,8 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
 	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
 	dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
-	if (dev->current_state != state && printk_ratelimit())
-		pci_info(dev, "Refused to change power state, currently in D%d\n",
+	if (dev->current_state != state)
+		pci_info_ratelimited(dev, "Refused to change power state, currently in D%d\n",
 			 dev->current_state);

 	/*

@@ -1443,7 +1443,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)
 		pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
 		bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
 		res = pdev->resource + bar_idx;
-		size = order_base_2((resource_size(res) >> 20) | 1) - 1;
+		size = ilog2(resource_size(res)) - 20;
 		ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
 		ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT;
 		pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
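On the Resizable BAR hunk above: the size field encodes a BAR of 2^(n+20) bytes as the value n (0 = 1 MB, 1 = 2 MB, ...), so the restore path wants log2(bytes) - 20. The two expressions agree for every power-of-two size except 1 MB, where the old one underflows to -1 and corrupts the size bits. A standalone sanity check (plain C, with local stand-ins for the kernel's ilog2()/order_base_2(), which are not available outside the kernel):

	#include <stdio.h>
	#include <stdint.h>

	static int ilog2_u64(uint64_t v)        { int n = -1; while (v) { v >>= 1; n++; } return n; }
	static int order_base_2_u64(uint64_t v) { return v <= 1 ? 0 : ilog2_u64(v - 1) + 1; }

	int main(void)
	{
		uint64_t sizes[] = { 1ULL << 20, 1ULL << 21, 1ULL << 28 }; /* 1 MB, 2 MB, 256 MB */

		for (int i = 0; i < 3; i++) {
			uint64_t sz = sizes[i];
			int old_enc = order_base_2_u64((sz >> 20) | 1) - 1; /* old expression */
			int new_enc = ilog2_u64(sz) - 20;                    /* new expression */

			printf("%10llu bytes: old=%d new=%d\n",
			       (unsigned long long)sz, old_enc, new_enc);
		}
		return 0; /* prints old=-1 for the 1 MB case, identical values otherwise */
	}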
@@ -3581,7 +3581,7 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask)
 	}

 	/* Ensure upstream ports don't block AtomicOps on egress */
-	if (!bridge->has_secondary_link) {
+	if (pci_pcie_type(bridge) == PCI_EXP_TYPE_UPSTREAM) {
 		pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2,
 					   &ctl2);
 		if (ctl2 & PCI_EXP_DEVCTL2_ATOMIC_EGRESS_BLOCK)
@@ -5923,8 +5923,19 @@ resource_size_t __weak pcibios_default_alignment(void)
 	return 0;
 }

-#define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
-static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
+/*
+ * Arches that don't want to expose struct resource to userland as-is in
+ * sysfs and /proc can implement their own pci_resource_to_user().
+ */
+void __weak pci_resource_to_user(const struct pci_dev *dev, int bar,
+				 const struct resource *rsrc,
+				 resource_size_t *start, resource_size_t *end)
+{
+	*start = rsrc->start;
+	*end = rsrc->end;
+}
+
+static char *resource_alignment_param;
 static DEFINE_SPINLOCK(resource_alignment_lock);

 /**

@@ -5945,7 +5956,7 @@ static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev,
 	spin_lock(&resource_alignment_lock);
 	p = resource_alignment_param;
-	if (!*p && !align)
+	if (!p || !*p)
 		goto out;
 	if (pci_has_flag(PCI_PROBE_ONLY)) {
 		align = 0;
@@ -6109,35 +6120,41 @@ void pci_reassigndev_resource_alignment(struct pci_dev *dev)
 	}
 }

-static ssize_t pci_set_resource_alignment_param(const char *buf, size_t count)
+static ssize_t resource_alignment_show(struct bus_type *bus, char *buf)
 {
-	if (count > RESOURCE_ALIGNMENT_PARAM_SIZE - 1)
-		count = RESOURCE_ALIGNMENT_PARAM_SIZE - 1;
-	spin_lock(&resource_alignment_lock);
-	strncpy(resource_alignment_param, buf, count);
-	resource_alignment_param[count] = '\0';
-	spin_unlock(&resource_alignment_lock);
-	return count;
-}
-
-static ssize_t pci_get_resource_alignment_param(char *buf, size_t size)
-{
-	size_t count;
+	size_t count = 0;

 	spin_lock(&resource_alignment_lock);
-	count = snprintf(buf, size, "%s", resource_alignment_param);
+	if (resource_alignment_param)
+		count = snprintf(buf, PAGE_SIZE, "%s", resource_alignment_param);
 	spin_unlock(&resource_alignment_lock);
-	return count;
-}

-static ssize_t resource_alignment_show(struct bus_type *bus, char *buf)
-{
-	return pci_get_resource_alignment_param(buf, PAGE_SIZE);
+	/*
+	 * When set by the command line, resource_alignment_param will not
+	 * have a trailing line feed, which is ugly. So conditionally add
+	 * it here.
+	 */
+	if (count >= 2 && buf[count - 2] != '\n' && count < PAGE_SIZE - 1) {
+		buf[count - 1] = '\n';
+		buf[count++] = 0;
+	}
+
+	return count;
 }

 static ssize_t resource_alignment_store(struct bus_type *bus,
 					const char *buf, size_t count)
 {
-	return pci_set_resource_alignment_param(buf, count);
+	char *param = kstrndup(buf, count, GFP_KERNEL);
+
+	if (!param)
+		return -ENOMEM;
+
+	spin_lock(&resource_alignment_lock);
+	kfree(resource_alignment_param);
+	resource_alignment_param = param;
+	spin_unlock(&resource_alignment_lock);
+
+	return count;
 }

 static BUS_ATTR_RW(resource_alignment);
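For reference, the string kept in resource_alignment_param is the same one accepted by the pci=resource_alignment= boot parameter; values along the lines of 20@0000:03:00.0 (2^20-byte alignment for that device) or pci:8086:9c22 (match by vendor/device ID) are only illustrative here, and the exact grammar is documented in Documentation/admin-guide/kernel-parameters.txt. The behavioural change in this hunk is that the value now lives in a kmalloc'd buffer swapped under resource_alignment_lock rather than being truncated into a fixed COMMAND_LINE_SIZE array.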
@@ -6266,8 +6283,7 @@ static int __init pci_setup(char *str)
 			} else if (!strncmp(str, "cbmemsize=", 10)) {
 				pci_cardbus_mem_size = memparse(str + 10, &str);
 			} else if (!strncmp(str, "resource_alignment=", 19)) {
-				pci_set_resource_alignment_param(str + 19,
-							strlen(str + 19));
+				resource_alignment_param = str + 19;
 			} else if (!strncmp(str, "ecrc=", 5)) {
 				pcie_ecrc_get_policy(str + 5);
 			} else if (!strncmp(str, "hpiosize=", 9)) {
@@ -6302,15 +6318,18 @@ static int __init pci_setup(char *str)
 early_param("pci", pci_setup);

 /*
- * 'disable_acs_redir_param' is initialized in pci_setup(), above, to point
- * to data in the __initdata section which will be freed after the init
- * sequence is complete. We can't allocate memory in pci_setup() because some
- * architectures do not have any memory allocation service available during
- * an early_param() call. So we allocate memory and copy the variable here
- * before the init section is freed.
+ * 'resource_alignment_param' and 'disable_acs_redir_param' are initialized
+ * in pci_setup(), above, to point to data in the __initdata section which
+ * will be freed after the init sequence is complete. We can't allocate memory
+ * in pci_setup() because some architectures do not have any memory allocation
+ * service available during an early_param() call. So we allocate memory and
+ * copy the variable here before the init section is freed.
+ *
 */
 static int __init pci_realloc_setup_params(void)
 {
+	resource_alignment_param = kstrdup(resource_alignment_param,
+					   GFP_KERNEL);
 	disable_acs_redir_param = kstrdup(disable_acs_redir_param, GFP_KERNEL);

 	return 0;
@@ -18,7 +18,6 @@
 #include <linux/slab.h>
 #include <linux/jiffies.h>
 #include <linux/delay.h>
-#include <linux/pci-aspm.h>
 #include "../pci.h"

 #ifdef MODULE_PARAM_PREFIX

@@ -913,10 +912,10 @@ void pcie_aspm_init_link_state(struct pci_dev *pdev)
 	/*
 	 * We allocate pcie_link_state for the component on the upstream
-	 * end of a Link, so there's nothing to do unless this device has a
-	 * Link on its secondary side.
+	 * end of a Link, so there's nothing to do unless this device is
+	 * downstream port.
 	 */
-	if (!pdev->has_secondary_link)
+	if (!pcie_downstream_port(pdev))
 		return;

 	/* VIA has a strange chipset, root port is under a bridge */

@@ -1070,7 +1069,7 @@ static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
 	if (!pci_is_pcie(pdev))
 		return 0;

-	if (pdev->has_secondary_link)
+	if (pcie_downstream_port(pdev))
 		parent = pdev;
 	if (!parent || !parent->link_state)
 		return -EINVAL;
@@ -166,7 +166,7 @@ static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service)
 	driver = pcie_port_find_service(dev, service);
 	if (driver && driver->reset_link) {
 		status = driver->reset_link(dev);
-	} else if (dev->has_secondary_link) {
+	} else if (pcie_downstream_port(dev)) {
 		status = default_reset_link(dev);
 	} else {
 		pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n",
@@ -15,7 +15,6 @@
 #include "pci.h"

 DECLARE_RWSEM(pci_bus_sem);
-EXPORT_SYMBOL_GPL(pci_bus_sem);

 /*
  * pci_for_each_dma_alias - Iterate over DMA aliases for a device
@@ -1662,8 +1662,8 @@ static int iov_resources_unassigned(struct pci_dev *dev, void *data)
 	int i;
 	bool *unassigned = data;

-	for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++) {
-		struct resource *r = &dev->resource[i];
+	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
+		struct resource *r = &dev->resource[i + PCI_IOV_RESOURCES];
 		struct pci_bus_region region;

 		/* Not assigned or rejected by kernel? */