Commit 991688bf authored by Linus Torvalds

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Arnd Bergmann:
 "Driver updates for ARM SoCs, including a couple of newly added
  drivers:

   - A new driver for the power management controller on TI Keystone

   - Support for the prerelease "SCPI" firmware protocol that ended up
     being shipped by Amlogic in their GXBB SoC.

   - A soc_device can now be matched using a glob from inside the
     kernel, when another driver wants to know the specific chip it is
     running on and cannot find out from DT, firmware or hardware (see
     the sketch after this message).

   - Renesas SoCs now support identification through the soc_device
     interface, both in user space and kernel.

   - Renesas r8a7743 and r8a7745 gain support for their system
     controller

   - A new checking module for the ARM "PSCI" (not to be confused with
     "SCPI" mentioned above) firmware interface.

   - A new driver for the Tegra GMI memory interface

   - Support for the Tegra firmware interfaces with their power
     management controllers

  As usual, the updates for the reset controller framework are merged
  here, as they tend to touch multiple SoCs as well, including a new
  driver for the Oxford (now Broadcom) OX820 chip and the Tegra bpmp
  interface.

  The existing drivers for Atmel, Qualcomm, NVIDIA, TI Davinci, and
  Rockchips SoCs see some further updates"
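
As context for the soc_device bullet above, here is a minimal sketch of how a
driver might use the new in-kernel matching; the r8a7795 identifiers and the
glob-style revision pattern are illustrative assumptions, not taken from this
merge:

#include <linux/printk.h>
#include <linux/sys_soc.h>

/* Hypothetical match table: unset fields act as wildcards, and set fields
 * are glob patterns matched against the registered soc_device. */
static const struct soc_device_attribute quirk_match[] = {
	{ .soc_id = "r8a7795", .revision = "ES1.*" },
	{ /* sentinel */ }
};

static void apply_soc_quirks(void)
{
	/* soc_device_match() returns the first matching entry, or NULL. */
	if (soc_device_match(quirk_match))
		pr_info("running on r8a7795 ES1.x, enabling workaround\n");
}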

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (76 commits)
  misc: sram: remove useless #ifdef
  drivers: psci: Allow PSCI node to be disabled
  drivers: psci: PSCI checker module
  soc: renesas: Identify SoC and register with the SoC bus
  firmware: qcom: scm: Return PTR_ERR when devm_clk_get fails
  firmware: qcom: scm: Remove core, iface and bus clocks dependency
  dt-bindings: firmware: scm: Add MSM8996 DT bindings
  memory: da8xx-ddrctl: drop the call to of_flat_dt_get_machine_name()
  bus: da8xx-mstpri: drop the call to of_flat_dt_get_machine_name()
  ARM: shmobile: Document DT bindings for Product Register
  soc: renesas: rcar-sysc: add R8A7745 support
  reset: Add Tegra BPMP reset driver
  dt-bindings: firmware: Allow child nodes inside the Tegra BPMP
  dt-bindings: Add power domains to Tegra BPMP firmware
  firmware: tegra: Add BPMP support
  firmware: tegra: Add IVC library
  dt-bindings: firmware: Add bindings for Tegra BPMP
  mailbox: tegra-hsp: Use after free in tegra_hsp_remove_doorbells()
  mailbox: Add Tegra HSP driver
  firmware: arm_scpi: add support for pre-v1.0 SCPI compatible
  ...
System Control and Power Interface (SCPI) Message Protocol
(in addition to the standard binding in [0])
----------------------------------------------------------
Required properties:
- compatible : should be "amlogic,meson-gxbb-scpi"
AMLOGIC SRAM and Shared Memory for SCPI
------------------------------------
Required properties:
- compatible : should be "amlogic,meson-gxbb-sram"
Each sub-node represents the reserved area for SCPI.
Required sub-node properties:
- compatible : should be "amlogic,meson-gxbb-scp-shmem" for SRAM based shared
memory on Amlogic GXBB SoC.
[0] Documentation/devicetree/bindings/arm/arm,scpi.txt
@@ -7,7 +7,10 @@ by Linux to initiate various system control and power operations.
Required properties:
- compatible : should be "arm,scpi"
- compatible : should be
* "arm,scpi" : For implementations complying to SCPI v1.0 or above
* "arm,scpi-pre-1.0" : For implementations complying to all
unversioned releases prior to SCPI v1.0
- mboxes: List of phandle and mailbox channel specifiers
All the channels reserved by remote SCP firmware for use by
SCPI message protocol should be specified in any order
@@ -59,18 +62,14 @@ SRAM and Shared Memory for SCPI
A small area of SRAM is reserved for SCPI communication between application
processors and SCP.
Required properties:
- compatible : should be "arm,juno-sram-ns" for Non-secure SRAM on Juno
The rest of the properties should follow the generic mmio-sram description
found in ../../sram/sram.txt
The properties should follow the generic mmio-sram description found in [3]
Each sub-node represents the reserved area for SCPI.
Required sub-node properties:
- reg : The base offset and size of the reserved area with the SRAM
- compatible : should be "arm,juno-scp-shmem" for Non-secure SRAM based
shared memory on Juno platforms
- compatible : should be "arm,scp-shmem" for Non-secure SRAM based
shared memory
Sensor bindings for the sensors based on SCPI Message Protocol
--------------------------------------------------------------
@@ -81,11 +80,9 @@ Required properties:
- #thermal-sensor-cells: should be set to 1. This property follows the
thermal device tree bindings[2].
Valid cell values are raw identifiers (Sensor
ID) as used by the firmware. Refer to
platform documentation for your
implementation for the IDs to use. For Juno
R0 and Juno R1 refer to [3].
Valid cell values are raw identifiers (Sensor ID)
as used by the firmware. Refer to platform details
for your implementation for the IDs to use.
Power domain bindings for the power domains based on SCPI Message Protocol
------------------------------------------------------------
@@ -112,7 +109,7 @@ Required properties:
[0] http://infocenter.arm.com/help/topic/com.arm.doc.dui0922b/index.html
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/thermal/thermal.txt
[3] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0922b/apas03s22.html
[3] Documentation/devicetree/bindings/sram/sram.txt
[4] Documentation/devicetree/bindings/power/power_domain.txt
Example:
......
@@ -225,3 +225,19 @@ required properties:
compatible = "atmel,sama5d3-sfr", "syscon";
reg = <0xf0038000 0x60>;
};
Security Module (SECUMOD)
The Security Module macrocell provides all necessary secure functions to avoid
voltage, temperature, frequency and mechanical attacks on the chip. It also
embeds secure memories that can be scrambled.
required properties:
- compatible: Should be "atmel,<chip>-secumod", "syscon".
<chip> can be "sama5d2".
- reg: Should contain registers location and length
secumod@fc040000 {
compatible = "atmel,sama5d2-secumod", "syscon";
reg = <0xfc040000 0x100>;
};
System Control and Power Interface (SCPI) Message Protocol
(in addition to the standard binding in [0])
Juno SRAM and Shared Memory for SCPI
------------------------------------
Required properties:
- compatible : should be "arm,juno-sram-ns" for Non-secure SRAM
Each sub-node represents the reserved area for SCPI.
Required sub-node properties:
- reg : The base offset and size of the reserved area with the SRAM
- compatible : should be "arm,juno-scp-shmem" for Non-secure SRAM based
shared memory on Juno platforms
Sensor bindings for the sensors based on SCPI Message Protocol
--------------------------------------------------------------
Required properties:
- compatible : should be "arm,scpi-sensors".
- #thermal-sensor-cells: should be set to 1.
For Juno R0 and Juno R1 refer to [1] for the
sensor identifiers
[0] Documentation/devicetree/bindings/arm/arm,scpi.txt
[1] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0922b/apas03s22.html
Texas Instruments System Control Interface (TI-SCI) Message Protocol
--------------------------------------------------------------------
Texas Instruments' processors, including those belonging to the Keystone
generation, have a separate hardware entity which is responsible for the
management of the System on Chip (SoC). This includes various system-level
functions as well.
An example of such an SoC is K2G, which contains the system control hardware
block called the Power Management Micro Controller (PMMC). This hardware block
is initialized early in the boot process and provides services to Operating
Systems on multiple processors, including ones running Linux.
See http://processors.wiki.ti.com/index.php/TISCI for protocol definition.
TI-SCI controller Device Node:
=============================
The TI-SCI node describes the Texas Instruments System Controller entity.
This parent node may optionally have additional child nodes which describe
specific functionality such as clocks, power domains, resets or additional
functionality as may be required for the SoC. This hierarchy also describes the
relationship between the TI-SCI parent node and its child nodes.
Required properties:
-------------------
- compatible: should be "ti,k2g-sci"
- mbox-names:
"rx" - Mailbox corresponding to receive path
"tx" - Mailbox corresponding to transmit path
- mboxes: Mailboxes corresponding to the mbox-names. Each value of the mboxes
property should contain a phandle to the mailbox controller device
node and an args specifier that will be the phandle to the intended
sub-mailbox child node to be used for communication.
See Documentation/devicetree/bindings/mailbox/mailbox.txt for more details
about the generic mailbox controller and client driver bindings. Also see
Documentation/devicetree/bindings/mailbox/ti,message-manager.txt for the
typical controller that is used to communicate with this system controller.
Optional Properties:
-------------------
- reg-names:
debug_messages - Map the Debug message region
- reg: register space corresponding to the debug_messages
- ti,system-reboot-controller: If system reboot can be triggered by SoC reboot
Example (K2G):
-------------
pmmc: pmmc {
compatible = "ti,k2g-sci";
mbox-names = "rx", "tx";
mboxes= <&msgmgr &msgmgr_proxy_pmmc_rx>,
<&msgmgr &msgmgr_proxy_pmmc_tx>;
reg-names = "debug_messages";
reg = <0x02921800 0x800>;
};
TI-SCI Client Device Node:
=========================
Client nodes are maintained as children of the relevant TI-SCI device node.
Example (K2G):
-------------
pmmc: pmmc {
compatible = "ti,k2g-sci";
...
my_clk_node: clk_node {
...
...
};
my_pd_node: pd_node {
...
...
};
};
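
As a rough client-side illustration, the sketch below shows how a driver under
such a child node might obtain the TI-SCI handle. It assumes the
ti_sci_get_handle()/ti_sci_put_handle() helpers exported by
drivers/firmware/ti_sci.c and the version fields of struct ti_sci_handle;
treat those details as assumptions rather than a definitive API reference:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/soc/ti/ti_sci_protocol.h>

static int my_client_setup(struct device *dev)
{
	const struct ti_sci_handle *handle;

	/* Assumption: ti_sci_get_handle() resolves the TI-SCI parent of
	 * this device's node (e.g. the "ti,k2g-sci" node above). */
	handle = ti_sci_get_handle(dev);
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	dev_info(dev, "TI-SCI ABI %u.%u, firmware rev 0x%04x\n",
		 handle->version.abi_major, handle->version.abi_minor,
		 handle->version.firmware_revision);

	ti_sci_put_handle(handle);
	return 0;
}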
@@ -85,3 +85,21 @@ Boards:
compatible = "renesas,sk-rzg1m", "renesas,r8a7743"
- Wheat
compatible = "renesas,wheat", "renesas,r8a7792"
Most Renesas ARM SoCs have a Product Register that allows retrieving SoC
product and revision information. If present, a device node for this register
should be added.
Required properties:
- compatible: Must be "renesas,prr".
- reg: Base address and length of the register block.
Examples
--------
prr: chipid@ff000044 {
compatible = "renesas,prr";
reg = <0 0xff000044 0 4>;
};
Device tree bindings for NVIDIA Tegra Generic Memory Interface bus
The Generic Memory Interface bus enables memory transfers between internal and
external memory. It can be used to attach various high-speed devices such as
synchronous/asynchronous NOR, FPGAs, UARTs and more.
The actual devices are instantiated from the child nodes of a GMI node.
Required properties:
- compatible : Should contain one of the following:
For Tegra20 must contain "nvidia,tegra20-gmi".
For Tegra30 must contain "nvidia,tegra30-gmi".
- reg: Should contain GMI controller registers location and length.
- clocks: Must contain an entry for each entry in clock-names.
- clock-names: Must include the following entries: "gmi"
- resets : Must contain an entry for each entry in reset-names.
- reset-names : Must include the following entries: "gmi"
- #address-cells: The number of cells used to represent physical base
addresses in the GMI address space. Should be 2.
- #size-cells: The number of cells used to represent the size of an address
range in the GMI address space. Should be 1.
- ranges: Must be set up to reflect the memory layout with four integer values
for each chip-select line in use (only one entry is supported, see the
comments below):
<cs-number> <offset> <physical address of mapping> <size>
Note that the GMI controller does not have any internal chip-select address
decoding; because of that, chip-selects either need to be managed via software
or by employing external chip-select decoding logic.
If external chip-select logic is used to support multiple devices, it is assumed
that the devices use the same timing and so are probably the same type. It also
assumes that they can fit in the 256MB address range. In this case only one
child device is supported, which represents the active chip-select line; see the
examples for more insight.
The chip-select number is decoded from the second address cell of the child
node's 'ranges' property; if the 'ranges' property is not present or empty, the
chip-select will then be decoded from the first cell of the 'reg' property.
Optional child cs node properties:
- nvidia,snor-data-width-32bit: Use 32bit data-bus, default is 16bit.
- nvidia,snor-mux-mode: Enable address/data MUX mode.
- nvidia,snor-rdy-active-before-data: Assert RDY signal one cycle before data.
If omitted it will be asserted with data.
- nvidia,snor-rdy-active-high: RDY signal is active high
- nvidia,snor-adv-active-high: ADV signal is active high
- nvidia,snor-oe-active-high: WE/OE signal is active high
- nvidia,snor-cs-active-high: CS signal is active high
Note that there is some special handling for the timing values (see the short
sketch after this list).
From Tegra TRM:
Programming 0 means 1 clock cycle: actual cycle = programmed cycle + 1
- nvidia,snor-muxed-width: Number of cycles MUX address/data asserted on the
bus. Valid values are 0-15, default is 1
- nvidia,snor-hold-width: Number of cycles CE stays asserted after the
de-assertion of WR_N (in case of SLAVE/MASTER Request) or OE_N
(in case of MASTER Request). Valid values are 0-15, default is 1
- nvidia,snor-adv-width: Number of cycles during which ADV stays asserted.
Valid values are 0-15, default is 1.
- nvidia,snor-ce-width: Number of cycles before CE is asserted.
Valid values are 0-15, default is 4
- nvidia,snor-we-width: Number of cycles during which WE stays asserted.
Valid values are 0-15, default is 1
- nvidia,snor-oe-width: Number of cycles during which OE stays asserted.
Valid values are 0-255, default is 1
- nvidia,snor-wait-width: Number of cycles before READY is asserted.
Valid values are 0-255, default is 3
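
To make the "+1" rule above concrete, a tiny sketch (an illustration, not part
of the binding; the helper name is made up):

/* A field programmed with N yields N + 1 clock cycles, so e.g.
 * nvidia,snor-we-width = <1> produces a 2-cycle WE pulse. */
static unsigned int gmi_cycles_to_field(unsigned int actual_cycles)
{
	return actual_cycles ? actual_cycles - 1 : 0;
}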
Example with two SJA1000 CAN controllers connected to the GMI bus. We wrap the
controllers with a simple-bus node since they are both connected to the same
chip-select (CS4); in this example external address decoding is provided:
gmi@70090000 {
compatible = "nvidia,tegra20-gmi";
reg = <0x70009000 0x1000>;
#address-cells = <2>;
#size-cells = <1>;
clocks = <&tegra_car TEGRA20_CLK_NOR>;
clock-names = "gmi";
resets = <&tegra_car 42>;
reset-names = "gmi";
ranges = <4 0 0xd0000000 0xfffffff>;
status = "okay";
bus@4,0 {
compatible = "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
ranges = <0 4 0 0x40100>;
nvidia,snor-mux-mode;
nvidia,snor-adv-active-high;
can@0 {
reg = <0 0x100>;
...
};
can@40000 {
reg = <0x40000 0x100>;
...
};
};
};
Example with one SJA1000 CAN controller connected to the GMI bus
on CS4:
gmi@70090000 {
compatible = "nvidia,tegra20-gmi";
reg = <0x70009000 0x1000>;
#address-cells = <2>;
#size-cells = <1>;
clocks = <&tegra_car TEGRA20_CLK_NOR>;
clock-names = "gmi";
resets = <&tegra_car 42>;
reset-names = "gmi";
ranges = <4 0 0xd0000000 0xfffffff>;
status = "okay";
can@4,0 {
reg = <4 0 0x100>;
nvidia,snor-mux-mode;
nvidia,snor-adv-active-high;
...
};
};
* Device tree bindings for Texas Instruments da8xx master peripheral
priority driver
DA8XX SoCs feature a set of registers that allow changing the priority of all
peripherals classified as masters.
Documentation:
OMAP-L138 (DA850) - http://www.ti.com/lit/ug/spruh82c/spruh82c.pdf
Required properties:
- compatible: "ti,da850-mstpri" - for da850 based boards
- reg: offset and length of the mstpri registers
Example for da850-lcdk is shown below.
mstpri {
compatible = "ti,da850-mstpri";
reg = <0x14110 0x0c>;
};
NVIDIA Tegra Boot and Power Management Processor (BPMP)
The BPMP is a specific processor in the Tegra chip, which is designed to
handle the boot process and to offload the power management, clock
management, and reset control tasks from the CPU. This binding document
defines the resources that are used by the BPMP firmware driver, which
creates the interprocessor communication (IPC) between the CPU
and the BPMP.
Required properties:
- name : Should be bpmp
- compatible
Array of strings
One of:
- "nvidia,tegra186-bpmp"
- mboxes : The phandle of mailbox controller and the mailbox specifier.
- shmem : List of the phandle of the TX and RX shared memory area that
the IPC between CPU and BPMP is based on.
- #clock-cells : Should be 1.
- #power-domain-cells : Should be 1.
- #reset-cells : Should be 1.
This node is a mailbox consumer. See the following files for details of
the mailbox subsystem, and the specifiers implemented by the relevant
provider(s):
- .../mailbox/mailbox.txt
- .../mailbox/nvidia,tegra186-hsp.txt
This node is a clock, power domain, and reset provider. See the following
files for general documentation of those features, and the specifiers
implemented by this node:
- .../clock/clock-bindings.txt
- <dt-bindings/clock/tegra186-clock.h>
- ../power/power_domain.txt
- <dt-bindings/power/tegra186-powergate.h>
- .../reset/reset.txt
- <dt-bindings/reset/tegra186-reset.h>
The BPMP implements some services which must be represented by separate nodes.
For example, it can provide access to certain I2C controllers, and the I2C
bindings represent each I2C controller as a device tree node. Such nodes should
be nested directly inside the main BPMP node.
Software can determine whether a child node of the BPMP node represents a device
by checking for a compatible property. Any node with a compatible property
represents a device that can be instantiated. Nodes without a compatible
property may be used to provide configuration information regarding the BPMP
itself, although no such configuration nodes are currently defined by this
binding.
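
A minimal sketch of that check, assuming plain of_get_property() from
<linux/of.h> (the function and its name are illustrative, not part of the
binding):

#include <linux/of.h>
#include <linux/printk.h>

/* Walk the BPMP node and report which children are instantiable devices,
 * i.e. which ones carry a compatible property. */
static void bpmp_scan_children(struct device_node *bpmp)
{
	struct device_node *child;

	for_each_child_of_node(bpmp, child)
		if (of_get_property(child, "compatible", NULL))
			pr_info("device child: %s\n", child->name);
}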
The BPMP firmware defines no single global name-/numbering-space for such
services. Put another way, the numbering scheme for I2C buses is distinct from
the numbering scheme for any other service the BPMP may provide (e.g. a future
hypothetical SPI bus service). As such, child device nodes will have no reg
property, and the BPMP node will have no #address-cells or #size-cells property.
The shared memory bindings for BPMP
-----------------------------------
The shared memory areas for the IPC TX and RX between the CPU and the BPMP
are predefined and work on top of sysram, which is an SRAM inside the chip.
See ".../sram/sram.txt" for the bindings.
Example:
hsp_top0: hsp@03c00000 {
...
#mbox-cells = <2>;
};
sysram@30000000 {
compatible = "nvidia,tegra186-sysram", "mmio-sram";
reg = <0x0 0x30000000 0x0 0x50000>;
#address-cells = <2>;
#size-cells = <2>;
ranges = <0 0x0 0x0 0x30000000 0x0 0x50000>;
cpu_bpmp_tx: shmem@4e000 {
compatible = "nvidia,tegra186-bpmp-shmem";
reg = <0x0 0x4e000 0x0 0x1000>;
label = "cpu-bpmp-tx";
pool;
};
cpu_bpmp_rx: shmem@4f000 {
compatible = "nvidia,tegra186-bpmp-shmem";
reg = <0x0 0x4f000 0x0 0x1000>;
label = "cpu-bpmp-rx";
pool;
};
};
bpmp {
compatible = "nvidia,tegra186-bpmp";
mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_BPMP>;
shmem = <&cpu_bpmp_tx &cpu_bpmp_rx>;
#clock-cells = <1>;
#power-domain-cells = <1>;
#reset-cells = <1>;
i2c {
compatible = "...";
...
};
};
@@ -10,8 +10,10 @@ Required properties:
* "qcom,scm-apq8064" for APQ8064 platforms
* "qcom,scm-msm8660" for MSM8660 platforms
* "qcom,scm-msm8690" for MSM8690 platforms
* "qcom,scm-msm8996" for MSM8996 platforms
* "qcom,scm" for later processors (MSM8916, APQ8084, MSM8974, etc)
- clocks: One to three clocks may be required based on compatible.
* No clock required for "qcom,scm-msm8996"
* Only core clock required for "qcom,scm-apq8064", "qcom,scm-msm8660", and "qcom,scm-msm8960"
* Core, iface, and bus clocks required for "qcom,scm"
- clock-names: Must contain "core" for the core clock, "iface" for the interface
......
NVIDIA Tegra Hardware Synchronization Primitives (HSP)
The HSP modules are used by the processors to share resources and communicate
with each other. They provide a set of hardware synchronization primitives for
interprocessor communication, so that interprocessor communication (IPC)
protocols can use hardware synchronization primitives when operating between
two processors not in an SMP relationship.
The features that HSP supports are shared mailboxes, shared semaphores,
arbitrated semaphores and doorbells.
Required properties:
- name : Should be hsp
- compatible
Array of strings.
one of:
- "nvidia,tegra186-hsp"
- reg : Offset and length of the register set for the device.
- interrupt-names
Array of strings.
Contains a list of names for the interrupts described by the interrupt
property. May contain the following entries, in any order:
- "doorbell"
Users of this binding MUST look up entries in the interrupt property
by name, using this interrupt-names property to do so (see the
driver-side sketch after this list).
- interrupts
Array of interrupt specifiers.
Must contain one entry per entry in the interrupt-names property,
in a matching order.
- #mbox-cells : Should be 2.
The mbox specifier of the "mboxes" property in the client node should
contain two cells. The first one should be the HSP type and the second
one should be the ID that the client is going to use. This information
can be found in the following file.
- <dt-bindings/mailbox/tegra186-hsp.h>.
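
On the driver side, a minimal sketch of the mandated by-name lookup, using the
generic platform_get_irq_byname() helper (the surrounding function is a
hypothetical frame, not taken from the HSP driver):

#include <linux/platform_device.h>

static int hsp_request_doorbell_irq(struct platform_device *pdev)
{
	/* Look the interrupt up by name, never by index, as this
	 * binding requires. */
	int irq = platform_get_irq_byname(pdev, "doorbell");

	return irq < 0 ? irq : 0; /* a real driver would request it here */
}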
Example:
hsp_top0: hsp@3c00000 {
compatible = "nvidia,tegra186-hsp";
reg = <0x0 0x03c00000 0x0 0xa0000>;
interrupts = <GIC_SPI 176 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "doorbell";
#mbox-cells = <2>;
};
client {
...
mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_XXX>;
};
* Device tree bindings for Texas Instruments da8xx DDR2/mDDR memory controller
The DDR2/mDDR memory controller present on Texas Instruments da8xx SoCs
features a set of registers which allow tweaking the controller's behavior.
Documentation:
OMAP-L138 (DA850) - http://www.ti.com/lit/ug/spruh82c/spruh82c.pdf
Required properties:
- compatible: "ti,da850-ddr-controller" - for da850 SoC based boards
- reg: a tuple containing the base address of the memory
controller and the size of the memory area to map
Example for da850 is shown below.
ddrctl {
compatible = "ti,da850-ddr-controller";
reg = <0xb0000000 0xe8>;
};
DT bindings for the Renesas R-Car System Controller
DT bindings for the Renesas R-Car (RZ/G) System Controller
== System Controller Node ==
The R-Car System Controller provides power management for the CPU cores and
various coprocessors.
The R-Car (RZ/G) System Controller provides power management for the CPU cores
and various coprocessors.
Required properties:
- compatible: Must contain exactly one of the following:
- "renesas,r8a7743-sysc" (RZ/G1M)
- "renesas,r8a7745-sysc" (RZ/G1E)
- "renesas,r8a7779-sysc" (R-Car H1)
- "renesas,r8a7790-sysc" (R-Car H2)
- "renesas,r8a7791-sysc" (R-Car M2-W)
......
@@ -5,45 +5,19 @@ Please also refer to reset.txt in this directory for common reset
controller binding usage.
Required properties:
- compatible: Should be "oxsemi,ox810se-reset"
- compatible: For OX810SE, should be "oxsemi,ox810se-reset"
For OX820, should be "oxsemi,ox820-reset"
- #reset-cells: 1, see below
Parent node should have the following properties :
- compatible: Should be "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd"
- compatible: For OX810SE, should be :
"oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd"
For OX820, should be :
"oxsemi,ox820-sys-ctrl", "syscon", "simple-mfd"
For OX810SE, the indices are :
- 0 : ARM
- 1 : COPRO
- 2 : Reserved
- 3 : Reserved
- 4 : USBHS
- 5 : USBHSPHY
- 6 : MAC
- 7 : PCI
- 8 : DMA
- 9 : DPE
- 10 : DDR
- 11 : SATA
- 12 : SATA_LINK
- 13 : SATA_PHY
- 14 : Reserved
- 15 : NAND
- 16 : GPIO
- 17 : UART1
- 18 : UART2
- 19 : MISC
- 20 : I2S
- 21 : AHB_MON
- 22 : UART3
- 23 : UART4
- 24 : SGDMA
- 25 : Reserved
- 26 : Reserved
- 27 : Reserved
- 28 : Reserved
- 29 : Reserved
- 30 : Reserved
- 31 : BUS
Reset indices are in dt-bindings include files:
- For OX810SE: include/dt-bindings/reset/oxsemi,ox810se.h
- For OX820: include/dt-bindings/reset/oxsemi,ox820.h
example:
......
@@ -4,7 +4,7 @@ Simple IO memory regions to be managed by the genalloc API.
Required properties:
- compatible : mmio-sram
- compatible : mmio-sram or atmel,sama5d2-securam
- reg : SRAM iomem address range
......
@@ -1627,6 +1627,7 @@ F: arch/arm/mach-qcom/
F: arch/arm64/boot/dts/qcom/*
F: drivers/i2c/busses/i2c-qup.c
F: drivers/clk/qcom/
F: drivers/pinctrl/qcom/
F: drivers/soc/qcom/
F: drivers/spi/spi-qup.c
F: drivers/tty/serial/msm_serial.h
@@ -12088,6 +12089,16 @@ S: Maintained
F: arch/xtensa/
F: drivers/irqchip/irq-xtensa-*
Texas Instruments' System Control Interface (TISCI) Protocol Driver
M: Nishanth Menon <nm@ti.com>
M: Tero Kristo <t-kristo@ti.com>
M: Santosh Shilimkar <ssantosh@kernel.org>
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
F: drivers/firmware/ti_sci*
F: include/linux/soc/ti/ti_sci_protocol.h
THANKO'S RAREMONO AM/FM/SW RADIO RECEIVER USB DRIVER
M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
......
@@ -41,6 +41,7 @@ menuconfig ARCH_RENESAS
select HAVE_ARM_TWD if SMP
select NO_IOPORT_MAP
select PINCTRL
select SOC_BUS
select ZONE_DMA if ARM_LPAE
if ARCH_RENESAS
......
@@ -28,7 +28,6 @@ if ARCH_STI
config SOC_STIH415
bool "STiH415 STMicroelectronics Consumer Electronics family"
default y
select STIH415_RESET
help
This enables support for STMicroelectronics Digital Consumer
Electronics family StiH415 parts, primarily targeted at set-top-box
@@ -38,7 +37,6 @@ config SOC_STIH415
config SOC_STIH416
bool "STiH416 STMicroelectronics Consumer Electronics family"
default y
select STIH416_RESET
help
This enables support for STMicroelectronics Digital Consumer
Electronics family StiH416 parts, primarily targeted at set-top-box
......
@@ -144,6 +144,7 @@ config ARCH_RENESAS
select PM
select PM_GENERIC_DOMAINS
select RENESAS_IRQC
select SOC_BUS
help
This enables support for the ARMv8 based Renesas SoCs.
......
@@ -150,6 +150,13 @@ config TEGRA_ACONNECT
Driver for the Tegra ACONNECT bus which is used to interface with
the devices inside the Audio Processing Engine (APE) for Tegra210.
config TEGRA_GMI
tristate "Tegra Generic Memory Interface bus driver"
depends on ARCH_TEGRA
help
Driver for the Tegra Generic Memory Interface bus which can be used
to attach devices such as NOR, UART, FPGA and more.
config UNIPHIER_SYSTEM_BUS
tristate "UniPhier System Bus driver"
depends on ARCH_UNIPHIER && OF
@@ -167,4 +174,13 @@ config VEXPRESS_CONFIG
help
Platform configuration infrastructure for the ARM Ltd.
Versatile Express.
config DA8XX_MSTPRI
bool "TI da8xx master peripheral priority driver"
depends on ARCH_DAVINCI_DA8XX
help
Driver for Texas Instruments da8xx master peripheral priority
configuration. Allows adjusting the priorities of all master
peripherals.
endmenu
@@ -19,5 +19,8 @@ obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o
obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o
obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o
obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o
obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o
obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
/*
* TI da8xx master peripheral priority driver
*
* Copyright (C) 2016 BayLibre SAS
*
* Author:
* Bartosz Golaszewski <bgolaszewski@baylibre.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/regmap.h>
/*
* REVISIT: Linux doesn't have a good framework for the kind of performance
* knobs this driver controls. We can't use device tree properties as it deals
* with hardware configuration rather than description. We also don't want to
* commit to maintaining some random sysfs attributes.
*
* For now we just hardcode the register values for the boards that need
* some changes (as is the case for the LCD controller on da850-lcdk - the
* first board we support here). When linux gets an appropriate framework,
* we'll easily convert the driver to it.
*/
#define DA8XX_MSTPRI0_OFFSET 0
#define DA8XX_MSTPRI1_OFFSET 4
#define DA8XX_MSTPRI2_OFFSET 8
enum {
DA8XX_MSTPRI_ARM_I = 0,
DA8XX_MSTPRI_ARM_D,
DA8XX_MSTPRI_UPP,
DA8XX_MSTPRI_SATA,
DA8XX_MSTPRI_PRU0,
DA8XX_MSTPRI_PRU1,
DA8XX_MSTPRI_EDMA30TC0,
DA8XX_MSTPRI_EDMA30TC1,
DA8XX_MSTPRI_EDMA31TC0,
DA8XX_MSTPRI_VPIF_DMA_0,
DA8XX_MSTPRI_VPIF_DMA_1,
DA8XX_MSTPRI_EMAC,
DA8XX_MSTPRI_USB0CFG,
DA8XX_MSTPRI_USB0CDMA,
DA8XX_MSTPRI_UHPI,
DA8XX_MSTPRI_USB1,
DA8XX_MSTPRI_LCDC,
};
struct da8xx_mstpri_descr {
int reg;
int shift;
int mask;
};
static const struct da8xx_mstpri_descr da8xx_mstpri_priority_list[] = {
[DA8XX_MSTPRI_ARM_I] = {
.reg = DA8XX_MSTPRI0_OFFSET,
.shift = 0,
.mask = 0x0000000f,
},
[DA8XX_MSTPRI_ARM_D] = {
.reg = DA8XX_MSTPRI0_OFFSET,
.shift = 4,
.mask = 0x000000f0,
},
[DA8XX_MSTPRI_UPP] = {
.reg = DA8XX_MSTPRI0_OFFSET,
.shift = 16,
.mask = 0x000f0000,
},
[DA8XX_MSTPRI_SATA] = {
.reg = DA8XX_MSTPRI0_OFFSET,
.shift = 20,
.mask = 0x00f00000,
},
[DA8XX_MSTPRI_PRU0] = {
.reg = DA8XX_MSTPRI1_OFFSET,
.shift = 0,
.mask = 0x0000000f,
},
[DA8XX_MSTPRI_PRU1] = {
.reg = DA8XX_MSTPRI1_OFFSET,
.shift = 4,
.mask = 0x000000f0,
},
[DA8XX_MSTPRI_EDMA30TC0] = {
.reg = DA8XX_MSTPRI1_OFFSET,
.shift = 8,
.mask = 0x00000f00,
},
[DA8XX_MSTPRI_EDMA30TC1] = {
.reg = DA8XX_MSTPRI1_OFFSET,
.shift = 12,
.mask = 0x0000f000,
},
[DA8XX_MSTPRI_EDMA31TC0] = {
.reg = DA8XX_MSTPRI1_OFFSET,
.shift = 16,
.mask = 0x000f0000,
},
[DA8XX_MSTPRI_VPIF_DMA_0] = {
.reg = DA8XX_MSTPRI1_OFFSET,
.shift = 24,
.mask = 0x0f000000,
},
[DA8XX_MSTPRI_VPIF_DMA_1] = {
.reg = DA8XX_MSTPRI1_OFFSET,
.shift = 28,
.mask = 0xf0000000,
},
[DA8XX_MSTPRI_EMAC] = {
.reg = DA8XX_MSTPRI2_OFFSET,
.shift = 0,
.mask = 0x0000000f,
},
[DA8XX_MSTPRI_USB0CFG] = {
.reg = DA8XX_MSTPRI2_OFFSET,
.shift = 8,
.mask = 0x00000f00,
},
[DA8XX_MSTPRI_USB0CDMA] = {
.reg = DA8XX_MSTPRI2_OFFSET,
.shift = 12,
.mask = 0x0000f000,
},
[DA8XX_MSTPRI_UHPI] = {
.reg = DA8XX_MSTPRI2_OFFSET,
.shift = 20,
.mask = 0x00f00000,
},
[DA8XX_MSTPRI_USB1] = {
.reg = DA8XX_MSTPRI2_OFFSET,
.shift = 24,
.mask = 0x0f000000,
},
[DA8XX_MSTPRI_LCDC] = {
.reg = DA8XX_MSTPRI2_OFFSET,
.shift = 28,
.mask = 0xf0000000,
},
};
struct da8xx_mstpri_priority {
int which;
u32 val;
};
struct da8xx_mstpri_board_priorities {
const char *board;
const struct da8xx_mstpri_priority *priorities;
size_t numprio;
};
/*
* Default memory settings of da850 do not meet the throughput/latency
* requirements of tilcdc. This results in the image displayed being
* incorrect and the following warning being displayed by the LCDC
* drm driver:
*
* tilcdc da8xx_lcdc.0: tilcdc_crtc_irq(0x00000020): FIFO underfow
*/
static const struct da8xx_mstpri_priority da850_lcdk_priorities[] = {
{
.which = DA8XX_MSTPRI_LCDC,
.val = 0,
},
{
.which = DA8XX_MSTPRI_EDMA30TC1,
.val = 0,
},
{
.which = DA8XX_MSTPRI_EDMA30TC0,
.val = 1,
},
};
static const struct da8xx_mstpri_board_priorities da8xx_mstpri_board_confs[] = {
{
.board = "ti,da850-lcdk",
.priorities = da850_lcdk_priorities,
.numprio = ARRAY_SIZE(da850_lcdk_priorities),
},
};
static const struct da8xx_mstpri_board_priorities *
da8xx_mstpri_get_board_prio(void)
{
const struct da8xx_mstpri_board_priorities *board_prio;
int i;
for (i = 0; i < ARRAY_SIZE(da8xx_mstpri_board_confs); i++) {
board_prio = &da8xx_mstpri_board_confs[i];
if (of_machine_is_compatible(board_prio->board))
return board_prio;
}
return NULL;
}
static int da8xx_mstpri_probe(struct platform_device *pdev)
{
const struct da8xx_mstpri_board_priorities *prio_list;
const struct da8xx_mstpri_descr *prio_descr;
const struct da8xx_mstpri_priority *prio;
struct device *dev = &pdev->dev;
struct resource *res;
void __iomem *mstpri;
u32 reg;
int i;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mstpri = devm_ioremap_resource(dev, res);
if (IS_ERR(mstpri)) {
dev_err(dev, "unable to map MSTPRI registers\n");
return PTR_ERR(mstpri);
}
prio_list = da8xx_mstpri_get_board_prio();
if (!prio_list) {
dev_err(dev, "no master priorities defined for this board\n");
return -EINVAL;
}
for (i = 0; i < prio_list->numprio; i++) {
prio = &prio_list->priorities[i];
prio_descr = &da8xx_mstpri_priority_list[prio->which];
if (prio_descr->reg + sizeof(u32) > resource_size(res)) {
dev_warn(dev, "register offset out of range\n");
continue;
}
reg = readl(mstpri + prio_descr->reg);
reg &= ~prio_descr->mask;
reg |= prio->val << prio_descr->shift;
writel(reg, mstpri + prio_descr->reg);
}
return 0;
}
static const struct of_device_id da8xx_mstpri_of_match[] = {
{ .compatible = "ti,da850-mstpri", },
{ },
};
static struct platform_driver da8xx_mstpri_driver = {
.probe = da8xx_mstpri_probe,
.driver = {
.name = "da8xx-mstpri",
.of_match_table = da8xx_mstpri_of_match,
},
};
module_platform_driver(da8xx_mstpri_driver);
MODULE_AUTHOR("Bartosz Golaszewski <bgolaszewski@baylibre.com>");
MODULE_DESCRIPTION("TI da8xx master peripheral priority driver");
MODULE_LICENSE("GPL v2");
/*
* Driver for NVIDIA Generic Memory Interface
*
* Copyright (C) 2016 Host Mobility AB. All rights reserved.
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/reset.h>
#define TEGRA_GMI_CONFIG 0x00
#define TEGRA_GMI_CONFIG_GO BIT(31)
#define TEGRA_GMI_BUS_WIDTH_32BIT BIT(30)
#define TEGRA_GMI_MUX_MODE BIT(28)
#define TEGRA_GMI_RDY_BEFORE_DATA BIT(24)
#define TEGRA_GMI_RDY_ACTIVE_HIGH BIT(23)
#define TEGRA_GMI_ADV_ACTIVE_HIGH BIT(22)
#define TEGRA_GMI_OE_ACTIVE_HIGH BIT(21)
#define TEGRA_GMI_CS_ACTIVE_HIGH BIT(20)
#define TEGRA_GMI_CS_SELECT(x) ((x & 0x7) << 4)
#define TEGRA_GMI_TIMING0 0x10
#define TEGRA_GMI_MUXED_WIDTH(x) ((x & 0xf) << 12)
#define TEGRA_GMI_HOLD_WIDTH(x) ((x & 0xf) << 8)
#define TEGRA_GMI_ADV_WIDTH(x) ((x & 0xf) << 4)
#define TEGRA_GMI_CE_WIDTH(x) (x & 0xf)
#define TEGRA_GMI_TIMING1 0x14
#define TEGRA_GMI_WE_WIDTH(x) ((x & 0xff) << 16)
#define TEGRA_GMI_OE_WIDTH(x) ((x & 0xff) << 8)
#define TEGRA_GMI_WAIT_WIDTH(x) (x & 0xff)
#define TEGRA_GMI_MAX_CHIP_SELECT 8
struct tegra_gmi {
struct device *dev;
void __iomem *base;
struct clk *clk;
struct reset_control *rst;
u32 snor_config;
u32 snor_timing0;
u32 snor_timing1;
};
static int tegra_gmi_enable(struct tegra_gmi *gmi)
{
int err;
err = clk_prepare_enable(gmi->clk);
if (err < 0) {
dev_err(gmi->dev, "failed to enable clock: %d\n", err);
return err;
}
reset_control_assert(gmi->rst);
usleep_range(2000, 4000);
reset_control_deassert(gmi->rst);
writel(gmi->snor_timing0, gmi->base + TEGRA_GMI_TIMING0);
writel(gmi->snor_timing1, gmi->base + TEGRA_GMI_TIMING1);
gmi->snor_config |= TEGRA_GMI_CONFIG_GO;
writel(gmi->snor_config, gmi->base + TEGRA_GMI_CONFIG);
return 0;
}
static void tegra_gmi_disable(struct tegra_gmi *gmi)
{
u32 config;
/* stop GMI operation */
config = readl(gmi->base + TEGRA_GMI_CONFIG);
config &= ~TEGRA_GMI_CONFIG_GO;
writel(config, gmi->base + TEGRA_GMI_CONFIG);
reset_control_assert(gmi->rst);
clk_disable_unprepare(gmi->clk);
}
static int tegra_gmi_parse_dt(struct tegra_gmi *gmi)
{
struct device_node *child;
u32 property, ranges[4];
int err;
child = of_get_next_available_child(gmi->dev->of_node, NULL);
if (!child) {
dev_err(gmi->dev, "no child nodes found\n");
return -ENODEV;
}
/*
* We currently only support one child device due to lack of
* chip-select address decoding. Which means that we only have one
* chip-select line from the GMI controller.
*/
if (of_get_child_count(gmi->dev->of_node) > 1)
dev_warn(gmi->dev, "only one child device is supported.");
if (of_property_read_bool(child, "nvidia,snor-data-width-32bit"))
gmi->snor_config |= TEGRA_GMI_BUS_WIDTH_32BIT;
if (of_property_read_bool(child, "nvidia,snor-mux-mode"))
gmi->snor_config |= TEGRA_GMI_MUX_MODE;
if (of_property_read_bool(child, "nvidia,snor-rdy-active-before-data"))
gmi->snor_config |= TEGRA_GMI_RDY_BEFORE_DATA;
if (of_property_read_bool(child, "nvidia,snor-rdy-active-high"))
gmi->snor_config |= TEGRA_GMI_RDY_ACTIVE_HIGH;
if (of_property_read_bool(child, "nvidia,snor-adv-active-high"))
gmi->snor_config |= TEGRA_GMI_ADV_ACTIVE_HIGH;
if (of_property_read_bool(child, "nvidia,snor-oe-active-high"))
gmi->snor_config |= TEGRA_GMI_OE_ACTIVE_HIGH;
if (of_property_read_bool(child, "nvidia,snor-cs-active-high"))
gmi->snor_config |= TEGRA_GMI_CS_ACTIVE_HIGH;
/* Decode the CS# */
err = of_property_read_u32_array(child, "ranges", ranges, 4);
if (err < 0) {
/* Invalid binding */
if (err == -EOVERFLOW) {
dev_err(gmi->dev,
"failed to decode CS: invalid ranges length\n");
goto error_cs;
}
/*
* If we reach here it means that the child node has an empty
* ranges or it does not exist at all. Attempt to decode the
* CS# from the reg property instead.
*/
err = of_property_read_u32(child, "reg", &property);
if (err < 0) {
dev_err(gmi->dev,
"failed to decode CS: no reg property found\n");
goto error_cs;
}
} else {
property = ranges[1];
}
/* Valid chip selects are CS0-CS7 */
if (property >= TEGRA_GMI_MAX_CHIP_SELECT) {
dev_err(gmi->dev, "invalid chip select: %d", property);
err = -EINVAL;
goto error_cs;
}
gmi->snor_config |= TEGRA_GMI_CS_SELECT(property);
/* The default values that are provided below are reset values */
if (!of_property_read_u32(child, "nvidia,snor-muxed-width", &property))
gmi->snor_timing0 |= TEGRA_GMI_MUXED_WIDTH(property);
else
gmi->snor_timing0 |= TEGRA_GMI_MUXED_WIDTH(1);
if (!of_property_read_u32(child, "nvidia,snor-hold-width", &property))
gmi->snor_timing0 |= TEGRA_GMI_HOLD_WIDTH(property);
else
gmi->snor_timing0 |= TEGRA_GMI_HOLD_WIDTH(1);
if (!of_property_read_u32(child, "nvidia,snor-adv-width", &property))
gmi->snor_timing0 |= TEGRA_GMI_ADV_WIDTH(property);
else
gmi->snor_timing0 |= TEGRA_GMI_ADV_WIDTH(1);
if (!of_property_read_u32(child, "nvidia,snor-ce-width", &property))
gmi->snor_timing0 |= TEGRA_GMI_CE_WIDTH(property);
else
gmi->snor_timing0 |= TEGRA_GMI_CE_WIDTH(4);
if (!of_property_read_u32(child, "nvidia,snor-we-width", &property))
gmi->snor_timing1 |= TEGRA_GMI_WE_WIDTH(property);
else
gmi->snor_timing1 |= TEGRA_GMI_WE_WIDTH(1);
if (!of_property_read_u32(child, "nvidia,snor-oe-width", &property))
gmi->snor_timing1 |= TEGRA_GMI_OE_WIDTH(property);
else
gmi->snor_timing1 |= TEGRA_GMI_OE_WIDTH(1);
if (!of_property_read_u32(child, "nvidia,snor-wait-width", &property))
gmi->snor_timing1 |= TEGRA_GMI_WAIT_WIDTH(property);
else
gmi->snor_timing1 |= TEGRA_GMI_WAIT_WIDTH(3);
error_cs:
of_node_put(child);
return err;
}
static int tegra_gmi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct tegra_gmi *gmi;
struct resource *res;
int err;
gmi = devm_kzalloc(dev, sizeof(*gmi), GFP_KERNEL);
if (!gmi)
return -ENOMEM;
gmi->dev = dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
gmi->base = devm_ioremap_resource(dev, res);
if (IS_ERR(gmi->base))
return PTR_ERR(gmi->base);
gmi->clk = devm_clk_get(dev, "gmi");
if (IS_ERR(gmi->clk)) {
dev_err(dev, "can not get clock\n");
return PTR_ERR(gmi->clk);
}
gmi->rst = devm_reset_control_get(dev, "gmi");
if (IS_ERR(gmi->rst)) {
dev_err(dev, "can not get reset\n");
return PTR_ERR(gmi->rst);
}
err = tegra_gmi_parse_dt(gmi);
if (err)
return err;
err = tegra_gmi_enable(gmi);
if (err < 0)
return err;
err = of_platform_default_populate(dev->of_node, NULL, dev);
if (err < 0) {
dev_err(dev, "fail to create devices.\n");
tegra_gmi_disable(gmi);
return err;
}
platform_set_drvdata(pdev, gmi);
return 0;
}
static int tegra_gmi_remove(struct platform_device *pdev)
{
struct tegra_gmi *gmi = platform_get_drvdata(pdev);
of_platform_depopulate(gmi->dev);
tegra_gmi_disable(gmi);
return 0;
}
static const struct of_device_id tegra_gmi_id_table[] = {
{ .compatible = "nvidia,tegra20-gmi", },
{ .compatible = "nvidia,tegra30-gmi", },
{ }
};
MODULE_DEVICE_TABLE(of, tegra_gmi_id_table);
static struct platform_driver tegra_gmi_driver = {
.probe = tegra_gmi_probe,
.remove = tegra_gmi_remove,
.driver = {
.name = "tegra-gmi",
.of_match_table = tegra_gmi_id_table,
},
};
module_platform_driver(tegra_gmi_driver);
MODULE_AUTHOR("Mirza Krak <mirza.krak@gmail.com");
MODULE_DESCRIPTION("NVIDIA Tegra GMI Bus Driver");
MODULE_LICENSE("GPL v2");
@@ -8,6 +8,17 @@ menu "Firmware Drivers"
config ARM_PSCI_FW
bool
config ARM_PSCI_CHECKER
bool "ARM PSCI checker"
depends on ARM_PSCI_FW && HOTPLUG_CPU && !TORTURE_TEST
help
Run the PSCI checker during startup. This checks that hotplug and
suspend operations work correctly when using PSCI.
The torture tests may interfere with the PSCI checker by turning CPUs
on and off through hotplug, so for now torture tests and PSCI checker
are mutually exclusive.
config ARM_SCPI_PROTOCOL
tristate "ARM System Control and Power Interface (SCPI) Message Protocol"
depends on MAILBOX
@@ -203,6 +214,21 @@ config QCOM_SCM_64
def_bool y
depends on QCOM_SCM && ARM64
config TI_SCI_PROTOCOL
tristate "TI System Control Interface (TISCI) Message Protocol"
depends on TI_MESSAGE_MANAGER
help
TI System Control Interface (TISCI) Message Protocol is used to manage
compute systems such as ARM, DSP, etc. with the system controller in
complex Systems on Chip (SoC) such as those found on certain Keystone
generation SoCs from TI.
The system controller provides various facilities including power
management function support.
This protocol library is used by client drivers to use the features
provided by the system controller.
config HAVE_ARM_SMCCC
bool
@@ -210,5 +236,6 @@ source "drivers/firmware/broadcom/Kconfig"
source "drivers/firmware/google/Kconfig"
source "drivers/firmware/efi/Kconfig"
source "drivers/firmware/meson/Kconfig"
source "drivers/firmware/tegra/Kconfig"
endmenu
@@ -2,6 +2,7 @@
# Makefile for the linux kernel.
#
obj-$(CONFIG_ARM_PSCI_FW) += psci.o
obj-$(CONFIG_ARM_PSCI_CHECKER) += psci_checker.o
obj-$(CONFIG_ARM_SCPI_PROTOCOL) += arm_scpi.o
obj-$(CONFIG_ARM_SCPI_POWER_DOMAIN) += scpi_pm_domain.o
obj-$(CONFIG_DMI) += dmi_scan.o
@@ -20,9 +21,11 @@ obj-$(CONFIG_QCOM_SCM) += qcom_scm.o
obj-$(CONFIG_QCOM_SCM_64) += qcom_scm-64.o
obj-$(CONFIG_QCOM_SCM_32) += qcom_scm-32.o
CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a
obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o
obj-y += broadcom/
obj-y += meson/
obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
obj-$(CONFIG_EFI) += efi/
obj-$(CONFIG_UEFI_CPER) += efi/
obj-y += tegra/
@@ -50,20 +50,27 @@
#define CMD_TOKEN_ID_MASK 0xff
#define CMD_DATA_SIZE_SHIFT 16
#define CMD_DATA_SIZE_MASK 0x1ff
#define CMD_LEGACY_DATA_SIZE_SHIFT 20
#define CMD_LEGACY_DATA_SIZE_MASK 0x1ff
#define PACK_SCPI_CMD(cmd_id, tx_sz) \
((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) | \
(((tx_sz) & CMD_DATA_SIZE_MASK) << CMD_DATA_SIZE_SHIFT))
#define ADD_SCPI_TOKEN(cmd, token) \
((cmd) |= (((token) & CMD_TOKEN_ID_MASK) << CMD_TOKEN_ID_SHIFT))
#define PACK_LEGACY_SCPI_CMD(cmd_id, tx_sz) \
((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) | \
(((tx_sz) & CMD_LEGACY_DATA_SIZE_MASK) << CMD_LEGACY_DATA_SIZE_SHIFT))
#define CMD_SIZE(cmd) (((cmd) >> CMD_DATA_SIZE_SHIFT) & CMD_DATA_SIZE_MASK)
#define CMD_LEGACY_SIZE(cmd) (((cmd) >> CMD_LEGACY_DATA_SIZE_SHIFT) & \
CMD_LEGACY_DATA_SIZE_MASK)
#define CMD_UNIQ_MASK (CMD_TOKEN_ID_MASK << CMD_TOKEN_ID_SHIFT | CMD_ID_MASK)
#define CMD_XTRACT_UNIQ(cmd) ((cmd) & CMD_UNIQ_MASK)
#define SCPI_SLOT 0
#define MAX_DVFS_DOMAINS 8
#define MAX_DVFS_OPPS 8
#define MAX_DVFS_OPPS 16
#define DVFS_LATENCY(hdr) (le32_to_cpu(hdr) >> 16)
#define DVFS_OPP_COUNT(hdr) ((le32_to_cpu(hdr) >> 8) & 0xff)
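
A worked instance of the two packing layouts above; CMD_ID_SHIFT == 0,
CMD_ID_MASK == 0x7f and CMD_TOKEN_ID_SHIFT == 8 are assumptions here, since
those defines sit in the context elided from this hunk:

/*
 * standard: data size in bits 24:16, token in bits 15:8, command id in 6:0
 * legacy:   data size in bits 28:20, token in bits 15:8, command id in 6:0
 *
 * e.g. PACK_SCPI_CMD(0x15, 8)        == 0x00080015
 *      PACK_LEGACY_SCPI_CMD(0x15, 8) == 0x00800015
 */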
@@ -99,6 +106,7 @@ enum scpi_error_codes {
SCPI_ERR_MAX
};
/* SCPI Standard commands */
enum scpi_std_cmd {
SCPI_CMD_INVALID = 0x00,
SCPI_CMD_SCPI_READY = 0x01,
@@ -132,6 +140,108 @@ enum scpi_std_cmd {
SCPI_CMD_COUNT
};
/* SCPI Legacy Commands */
enum legacy_scpi_std_cmd {
LEGACY_SCPI_CMD_INVALID = 0x00,
LEGACY_SCPI_CMD_SCPI_READY = 0x01,
LEGACY_SCPI_CMD_SCPI_CAPABILITIES = 0x02,
LEGACY_SCPI_CMD_EVENT = 0x03,
LEGACY_SCPI_CMD_SET_CSS_PWR_STATE = 0x04,
LEGACY_SCPI_CMD_GET_CSS_PWR_STATE = 0x05,
LEGACY_SCPI_CMD_CFG_PWR_STATE_STAT = 0x06,
LEGACY_SCPI_CMD_GET_PWR_STATE_STAT = 0x07,
LEGACY_SCPI_CMD_SYS_PWR_STATE = 0x08,
LEGACY_SCPI_CMD_L2_READY = 0x09,
LEGACY_SCPI_CMD_SET_AP_TIMER = 0x0a,
LEGACY_SCPI_CMD_CANCEL_AP_TIME = 0x0b,
LEGACY_SCPI_CMD_DVFS_CAPABILITIES = 0x0c,
LEGACY_SCPI_CMD_GET_DVFS_INFO = 0x0d,
LEGACY_SCPI_CMD_SET_DVFS = 0x0e,
LEGACY_SCPI_CMD_GET_DVFS = 0x0f,
LEGACY_SCPI_CMD_GET_DVFS_STAT = 0x10,
LEGACY_SCPI_CMD_SET_RTC = 0x11,
LEGACY_SCPI_CMD_GET_RTC = 0x12,
LEGACY_SCPI_CMD_CLOCK_CAPABILITIES = 0x13,
LEGACY_SCPI_CMD_SET_CLOCK_INDEX = 0x14,
LEGACY_SCPI_CMD_SET_CLOCK_VALUE = 0x15,
LEGACY_SCPI_CMD_GET_CLOCK_VALUE = 0x16,
LEGACY_SCPI_CMD_PSU_CAPABILITIES = 0x17,
LEGACY_SCPI_CMD_SET_PSU = 0x18,
LEGACY_SCPI_CMD_GET_PSU = 0x19,
LEGACY_SCPI_CMD_SENSOR_CAPABILITIES = 0x1a,
LEGACY_SCPI_CMD_SENSOR_INFO = 0x1b,
LEGACY_SCPI_CMD_SENSOR_VALUE = 0x1c,
LEGACY_SCPI_CMD_SENSOR_CFG_PERIODIC = 0x1d,
LEGACY_SCPI_CMD_SENSOR_CFG_BOUNDS = 0x1e,
LEGACY_SCPI_CMD_SENSOR_ASYNC_VALUE = 0x1f,
LEGACY_SCPI_CMD_COUNT
};
/* List all commands that are required to go through the high priority link */
static int legacy_hpriority_cmds[] = {
LEGACY_SCPI_CMD_GET_CSS_PWR_STATE,
LEGACY_SCPI_CMD_CFG_PWR_STATE_STAT,
LEGACY_SCPI_CMD_GET_PWR_STATE_STAT,
LEGACY_SCPI_CMD_SET_DVFS,
LEGACY_SCPI_CMD_GET_DVFS,
LEGACY_SCPI_CMD_SET_RTC,
LEGACY_SCPI_CMD_GET_RTC,
LEGACY_SCPI_CMD_SET_CLOCK_INDEX,
LEGACY_SCPI_CMD_SET_CLOCK_VALUE,
LEGACY_SCPI_CMD_GET_CLOCK_VALUE,
LEGACY_SCPI_CMD_SET_PSU,
LEGACY_SCPI_CMD_GET_PSU,
LEGACY_SCPI_CMD_SENSOR_CFG_PERIODIC,
LEGACY_SCPI_CMD_SENSOR_CFG_BOUNDS,
};
/* List all commands used by this driver, used as indexes */
enum scpi_drv_cmds {
CMD_SCPI_CAPABILITIES = 0,
CMD_GET_CLOCK_INFO,
CMD_GET_CLOCK_VALUE,
CMD_SET_CLOCK_VALUE,
CMD_GET_DVFS,
CMD_SET_DVFS,
CMD_GET_DVFS_INFO,
CMD_SENSOR_CAPABILITIES,
CMD_SENSOR_INFO,
CMD_SENSOR_VALUE,
CMD_SET_DEVICE_PWR_STATE,
CMD_GET_DEVICE_PWR_STATE,
CMD_MAX_COUNT,
};
static int scpi_std_commands[CMD_MAX_COUNT] = {
SCPI_CMD_SCPI_CAPABILITIES,
SCPI_CMD_GET_CLOCK_INFO,
SCPI_CMD_GET_CLOCK_VALUE,
SCPI_CMD_SET_CLOCK_VALUE,
SCPI_CMD_GET_DVFS,
SCPI_CMD_SET_DVFS,
SCPI_CMD_GET_DVFS_INFO,
SCPI_CMD_SENSOR_CAPABILITIES,
SCPI_CMD_SENSOR_INFO,
SCPI_CMD_SENSOR_VALUE,
SCPI_CMD_SET_DEVICE_PWR_STATE,
SCPI_CMD_GET_DEVICE_PWR_STATE,
};
static int scpi_legacy_commands[CMD_MAX_COUNT] = {
LEGACY_SCPI_CMD_SCPI_CAPABILITIES,
-1, /* GET_CLOCK_INFO */
LEGACY_SCPI_CMD_GET_CLOCK_VALUE,
LEGACY_SCPI_CMD_SET_CLOCK_VALUE,
LEGACY_SCPI_CMD_GET_DVFS,
LEGACY_SCPI_CMD_SET_DVFS,
LEGACY_SCPI_CMD_GET_DVFS_INFO,
LEGACY_SCPI_CMD_SENSOR_CAPABILITIES,
LEGACY_SCPI_CMD_SENSOR_INFO,
LEGACY_SCPI_CMD_SENSOR_VALUE,
-1, /* SET_DEVICE_PWR_STATE */
-1, /* GET_DEVICE_PWR_STATE */
};
struct scpi_xfer {
u32 slot; /* has to be first element */
u32 cmd;
@@ -160,7 +270,10 @@ struct scpi_chan {
struct scpi_drvinfo {
u32 protocol_version;
u32 firmware_version;
bool is_legacy;
int num_chans;
int *commands;
DECLARE_BITMAP(cmd_priority, LEGACY_SCPI_CMD_COUNT);
atomic_t next_chan;
struct scpi_ops *scpi_ops;
struct scpi_chan *channels;
@@ -177,6 +290,11 @@ struct scpi_shared_mem {
u8 payload[0];
} __packed;
struct legacy_scpi_shared_mem {
__le32 status;
u8 payload[0];
} __packed;
struct scp_capabilities {
__le32 protocol_version;
__le32 event_version;
@@ -202,6 +320,12 @@ struct clk_set_value {
__le32 rate;
} __packed;
struct legacy_clk_set_value {
__le32 rate;
__le16 id;
__le16 reserved;
} __packed;
struct dvfs_info {
__le32 header;
struct {
@@ -273,19 +397,43 @@ static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd)
return;
}
list_for_each_entry(t, &ch->rx_pending, node)
if (CMD_XTRACT_UNIQ(t->cmd) == CMD_XTRACT_UNIQ(cmd)) {
list_del(&t->node);
match = t;
break;
}
/* Command type is not replied by the SCP Firmware in legacy Mode
* We should consider that command is the head of pending RX commands
* if the list is not empty. In TX only mode, the list would be empty.
*/
if (scpi_info->is_legacy) {
match = list_first_entry(&ch->rx_pending, struct scpi_xfer,
node);
list_del(&match->node);
} else {
list_for_each_entry(t, &ch->rx_pending, node)
if (CMD_XTRACT_UNIQ(t->cmd) == CMD_XTRACT_UNIQ(cmd)) {
list_del(&t->node);
match = t;
break;
}
}
/* check if wait_for_completion is in progress or timed-out */
if (match && !completion_done(&match->done)) {
struct scpi_shared_mem *mem = ch->rx_payload;
unsigned int len = min(match->rx_len, CMD_SIZE(cmd));
unsigned int len;
if (scpi_info->is_legacy) {
struct legacy_scpi_shared_mem *mem = ch->rx_payload;
/* RX Length is not replied by the legacy Firmware */
len = match->rx_len;
match->status = le32_to_cpu(mem->status);
memcpy_fromio(match->rx_buf, mem->payload, len);
} else {
struct scpi_shared_mem *mem = ch->rx_payload;
len = min(match->rx_len, CMD_SIZE(cmd));
match->status = le32_to_cpu(mem->status);
memcpy_fromio(match->rx_buf, mem->payload, len);
}
match->status = le32_to_cpu(mem->status);
memcpy_fromio(match->rx_buf, mem->payload, len);
if (match->rx_len > len)
memset(match->rx_buf + len, 0, match->rx_len - len);
complete(&match->done);
@@ -297,7 +445,10 @@ static void scpi_handle_remote_msg(struct mbox_client *c, void *msg)
{
struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
struct scpi_shared_mem *mem = ch->rx_payload;
u32 cmd = le32_to_cpu(mem->command);
u32 cmd = 0;
if (!scpi_info->is_legacy)
cmd = le32_to_cpu(mem->command);
scpi_process_cmd(ch, cmd);
}
@@ -309,8 +460,13 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg)
struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload;
if (t->tx_buf)
memcpy_toio(mem->payload, t->tx_buf, t->tx_len);
if (t->tx_buf) {
if (scpi_info->is_legacy)
memcpy_toio(ch->tx_payload, t->tx_buf, t->tx_len);
else
memcpy_toio(mem->payload, t->tx_buf, t->tx_len);
}
if (t->rx_buf) {
if (!(++ch->token))
++ch->token;
@@ -319,7 +475,9 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg)
list_add_tail(&t->node, &ch->rx_pending);
spin_unlock_irqrestore(&ch->rx_lock, flags);
}
mem->command = cpu_to_le32(t->cmd);
if (!scpi_info->is_legacy)
mem->command = cpu_to_le32(t->cmd);
}
static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch)
@@ -344,23 +502,38 @@ static void put_scpi_xfer(struct scpi_xfer *t, struct scpi_chan *ch)
mutex_unlock(&ch->xfers_lock);
}
static int scpi_send_message(u8 cmd, void *tx_buf, unsigned int tx_len,
static int scpi_send_message(u8 idx, void *tx_buf, unsigned int tx_len,
void *rx_buf, unsigned int rx_len)
{
int ret;
u8 chan;
u8 cmd;
struct scpi_xfer *msg;
struct scpi_chan *scpi_chan;
chan = atomic_inc_return(&scpi_info->next_chan) % scpi_info->num_chans;
if (scpi_info->commands[idx] < 0)
return -EOPNOTSUPP;
cmd = scpi_info->commands[idx];
if (scpi_info->is_legacy)
chan = test_bit(cmd, scpi_info->cmd_priority) ? 1 : 0;
else
chan = atomic_inc_return(&scpi_info->next_chan) %
scpi_info->num_chans;
scpi_chan = scpi_info->channels + chan;
msg = get_scpi_xfer(scpi_chan);
if (!msg)
return -ENOMEM;
msg->slot = BIT(SCPI_SLOT);
msg->cmd = PACK_SCPI_CMD(cmd, tx_len);
if (scpi_info->is_legacy) {
msg->cmd = PACK_LEGACY_SCPI_CMD(cmd, tx_len);
msg->slot = msg->cmd;
} else {
msg->slot = BIT(SCPI_SLOT);
msg->cmd = PACK_SCPI_CMD(cmd, tx_len);
}
msg->tx_buf = tx_buf;
msg->tx_len = tx_len;
msg->rx_buf = rx_buf;
@@ -397,7 +570,7 @@ scpi_clk_get_range(u16 clk_id, unsigned long *min, unsigned long *max)
struct clk_get_info clk;
__le16 le_clk_id = cpu_to_le16(clk_id);
ret = scpi_send_message(SCPI_CMD_GET_CLOCK_INFO, &le_clk_id,
ret = scpi_send_message(CMD_GET_CLOCK_INFO, &le_clk_id,
sizeof(le_clk_id), &clk, sizeof(clk));
if (!ret) {
*min = le32_to_cpu(clk.min_rate);
@@ -412,8 +585,9 @@ static unsigned long scpi_clk_get_val(u16 clk_id)
struct clk_get_value clk;
__le16 le_clk_id = cpu_to_le16(clk_id);
ret = scpi_send_message(SCPI_CMD_GET_CLOCK_VALUE, &le_clk_id,
ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
sizeof(le_clk_id), &clk, sizeof(clk));
return ret ? ret : le32_to_cpu(clk.rate);
}
@@ -425,7 +599,19 @@ static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
.rate = cpu_to_le32(rate)
};
return scpi_send_message(SCPI_CMD_SET_CLOCK_VALUE, &clk, sizeof(clk),
return scpi_send_message(CMD_SET_CLOCK_VALUE, &clk, sizeof(clk),
&stat, sizeof(stat));
}
static int legacy_scpi_clk_set_val(u16 clk_id, unsigned long rate)
{
int stat;
struct legacy_clk_set_value clk = {
.id = cpu_to_le16(clk_id),
.rate = cpu_to_le32(rate)
};
return scpi_send_message(CMD_SET_CLOCK_VALUE, &clk, sizeof(clk),
&stat, sizeof(stat));
}
@@ -434,8 +620,9 @@ static int scpi_dvfs_get_idx(u8 domain)
int ret;
u8 dvfs_idx;
ret = scpi_send_message(SCPI_CMD_GET_DVFS, &domain, sizeof(domain),
ret = scpi_send_message(CMD_GET_DVFS, &domain, sizeof(domain),
&dvfs_idx, sizeof(dvfs_idx));
return ret ? ret : dvfs_idx;
}
@@ -444,7 +631,7 @@ static int scpi_dvfs_set_idx(u8 domain, u8 index)
int stat;
struct dvfs_set dvfs = {domain, index};
return scpi_send_message(SCPI_CMD_SET_DVFS, &dvfs, sizeof(dvfs),
return scpi_send_message(CMD_SET_DVFS, &dvfs, sizeof(dvfs),
&stat, sizeof(stat));
}
@@ -468,9 +655,8 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
if (scpi_info->dvfs[domain]) /* data already populated */
return scpi_info->dvfs[domain];
ret = scpi_send_message(SCPI_CMD_GET_DVFS_INFO, &domain, sizeof(domain),
ret = scpi_send_message(CMD_GET_DVFS_INFO, &domain, sizeof(domain),
&buf, sizeof(buf));
if (ret)
return ERR_PTR(ret);
@@ -503,7 +689,7 @@ static int scpi_sensor_get_capability(u16 *sensors)
struct sensor_capabilities cap_buf;
int ret;
ret = scpi_send_message(SCPI_CMD_SENSOR_CAPABILITIES, NULL, 0, &cap_buf,
ret = scpi_send_message(CMD_SENSOR_CAPABILITIES, NULL, 0, &cap_buf,
sizeof(cap_buf));
if (!ret)
*sensors = le16_to_cpu(cap_buf.sensors);
@@ -517,7 +703,7 @@ static int scpi_sensor_get_info(u16 sensor_id, struct scpi_sensor_info *info)
struct _scpi_sensor_info _info;
int ret;
ret = scpi_send_message(SCPI_CMD_SENSOR_INFO, &id, sizeof(id),
ret = scpi_send_message(CMD_SENSOR_INFO, &id, sizeof(id),
&_info, sizeof(_info));
if (!ret) {
memcpy(info, &_info, sizeof(*info));
@@ -533,7 +719,7 @@ static int scpi_sensor_get_value(u16 sensor, u64 *val)
struct sensor_value buf;
int ret;
ret = scpi_send_message(SCPI_CMD_SENSOR_VALUE, &id, sizeof(id),
ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id),
&buf, sizeof(buf));
if (!ret)
*val = (u64)le32_to_cpu(buf.hi_val) << 32 |
@@ -548,7 +734,7 @@ static int scpi_device_get_power_state(u16 dev_id)
u8 pstate;
__le16 id = cpu_to_le16(dev_id);
ret = scpi_send_message(SCPI_CMD_GET_DEVICE_PWR_STATE, &id,
ret = scpi_send_message(CMD_GET_DEVICE_PWR_STATE, &id,
sizeof(id), &pstate, sizeof(pstate));
return ret ? ret : pstate;
}
@@ -561,7 +747,7 @@ static int scpi_device_set_power_state(u16 dev_id, u8 pstate)
.pstate = pstate,
};
return scpi_send_message(SCPI_CMD_SET_DEVICE_PWR_STATE, &dev_set,
return scpi_send_message(CMD_SET_DEVICE_PWR_STATE, &dev_set,
sizeof(dev_set), &stat, sizeof(stat));
}
@@ -591,12 +777,16 @@ static int scpi_init_versions(struct scpi_drvinfo *info)
int ret;
struct scp_capabilities caps;
ret = scpi_send_message(SCPI_CMD_SCPI_CAPABILITIES, NULL, 0,
ret = scpi_send_message(CMD_SCPI_CAPABILITIES, NULL, 0,
&caps, sizeof(caps));
if (!ret) {
info->protocol_version = le32_to_cpu(caps.protocol_version);
info->firmware_version = le32_to_cpu(caps.platform_version);
}
/* Ignore error if not implemented */
if (scpi_info->is_legacy && ret == -EOPNOTSUPP)
return 0;
return ret;
}
@@ -681,6 +871,11 @@ static int scpi_alloc_xfer_list(struct device *dev, struct scpi_chan *ch)
return 0;
}
static const struct of_device_id legacy_scpi_of_match[] = {
{.compatible = "arm,scpi-pre-1.0"},
{},
};
static int scpi_probe(struct platform_device *pdev)
{
int count, idx, ret;
@@ -693,6 +888,9 @@ static int scpi_probe(struct platform_device *pdev)
if (!scpi_info)
return -ENOMEM;
if (of_match_device(legacy_scpi_of_match, &pdev->dev))
scpi_info->is_legacy = true;
count = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
if (count < 0) {
dev_err(dev, "no mboxes property in '%s'\n", np->full_name);
@@ -755,8 +953,21 @@ static int scpi_probe(struct platform_device *pdev)
scpi_info->channels = scpi_chan;
scpi_info->num_chans = count;
scpi_info->commands = scpi_std_commands;
platform_set_drvdata(pdev, scpi_info);
if (scpi_info->is_legacy) {
/* Replace with legacy variants */
scpi_ops.clk_set_val = legacy_scpi_clk_set_val;
scpi_info->commands = scpi_legacy_commands;
/* Fill priority bitmap */
for (idx = 0; idx < ARRAY_SIZE(legacy_hpriority_cmds); idx++)
set_bit(legacy_hpriority_cmds[idx],
scpi_info->cmd_priority);
}
ret = scpi_init_versions(scpi_info);
if (ret) {
dev_err(dev, "incorrect or no SCP firmware found\n");
@@ -781,6 +992,7 @@ static int scpi_probe(struct platform_device *pdev)
static const struct of_device_id scpi_of_match[] = {
{.compatible = "arm,scpi"},
{.compatible = "arm,scpi-pre-1.0"},
{},
};
@@ -630,7 +630,7 @@ int __init psci_dt_init(void)
np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np);
-if (!np)
+if (!np || !of_device_is_available(np))
return -ENODEV;
init_fn = (psci_initcall_t)matched_np->data;
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Copyright (C) 2016 ARM Limited
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/cpu_pm.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/preempt.h>
#include <linux/psci.h>
#include <linux/slab.h>
#include <linux/tick.h>
#include <linux/topology.h>
#include <asm/cpuidle.h>
#include <uapi/linux/psci.h>
#define NUM_SUSPEND_CYCLE (10)
static unsigned int nb_available_cpus;
static int tos_resident_cpu = -1;
static atomic_t nb_active_threads;
static struct completion suspend_threads_started =
COMPLETION_INITIALIZER(suspend_threads_started);
static struct completion suspend_threads_done =
COMPLETION_INITIALIZER(suspend_threads_done);
/*
* We assume that PSCI operations are used if they are available. This is not
* necessarily true on arm64, since the decision is based on the
* "enable-method" property of each CPU in the DT, but given that there is no
* arch-specific way to check this, we assume that the DT is sensible.
*/
static int psci_ops_check(void)
{
int migrate_type = -1;
int cpu;
if (!(psci_ops.cpu_off && psci_ops.cpu_on && psci_ops.cpu_suspend)) {
pr_warn("Missing PSCI operations, aborting tests\n");
return -EOPNOTSUPP;
}
if (psci_ops.migrate_info_type)
migrate_type = psci_ops.migrate_info_type();
if (migrate_type == PSCI_0_2_TOS_UP_MIGRATE ||
migrate_type == PSCI_0_2_TOS_UP_NO_MIGRATE) {
/* There is a UP Trusted OS, find on which core it resides. */
for_each_online_cpu(cpu)
if (psci_tos_resident_on(cpu)) {
tos_resident_cpu = cpu;
break;
}
if (tos_resident_cpu == -1)
pr_warn("UP Trusted OS resides on no online CPU\n");
}
return 0;
}
static int find_clusters(const struct cpumask *cpus,
const struct cpumask **clusters)
{
unsigned int nb = 0;
cpumask_var_t tmp;
if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
return -ENOMEM;
cpumask_copy(tmp, cpus);
while (!cpumask_empty(tmp)) {
const struct cpumask *cluster =
topology_core_cpumask(cpumask_any(tmp));
clusters[nb++] = cluster;
cpumask_andnot(tmp, tmp, cluster);
}
free_cpumask_var(tmp);
return nb;
}
/*
* offlined_cpus is a temporary array but passing it as an argument avoids
* multiple allocations.
*/
static unsigned int down_and_up_cpus(const struct cpumask *cpus,
struct cpumask *offlined_cpus)
{
int cpu;
int err = 0;
cpumask_clear(offlined_cpus);
/* Try to power down all CPUs in the mask. */
for_each_cpu(cpu, cpus) {
int ret = cpu_down(cpu);
/*
* cpu_down() checks the number of online CPUs before the TOS
* resident CPU.
*/
if (cpumask_weight(offlined_cpus) + 1 == nb_available_cpus) {
if (ret != -EBUSY) {
pr_err("Unexpected return code %d while trying "
"to power down last online CPU %d\n",
ret, cpu);
++err;
}
} else if (cpu == tos_resident_cpu) {
if (ret != -EPERM) {
pr_err("Unexpected return code %d while trying "
"to power down TOS resident CPU %d\n",
ret, cpu);
++err;
}
} else if (ret != 0) {
pr_err("Error occurred (%d) while trying "
"to power down CPU %d\n", ret, cpu);
++err;
}
if (ret == 0)
cpumask_set_cpu(cpu, offlined_cpus);
}
/* Try to power up all the CPUs that have been offlined. */
for_each_cpu(cpu, offlined_cpus) {
int ret = cpu_up(cpu);
if (ret != 0) {
pr_err("Error occurred (%d) while trying "
"to power up CPU %d\n", ret, cpu);
++err;
} else {
cpumask_clear_cpu(cpu, offlined_cpus);
}
}
/*
* Something went bad at some point and some CPUs could not be turned
* back on.
*/
WARN_ON(!cpumask_empty(offlined_cpus) ||
num_online_cpus() != nb_available_cpus);
return err;
}
static int hotplug_tests(void)
{
int err;
cpumask_var_t offlined_cpus;
int i, nb_cluster;
const struct cpumask **clusters;
char *page_buf;
err = -ENOMEM;
if (!alloc_cpumask_var(&offlined_cpus, GFP_KERNEL))
return err;
/* We may have up to nb_available_cpus clusters. */
clusters = kmalloc_array(nb_available_cpus, sizeof(*clusters),
GFP_KERNEL);
if (!clusters)
goto out_free_cpus;
page_buf = (char *)__get_free_page(GFP_KERNEL);
if (!page_buf)
goto out_free_clusters;
err = 0;
nb_cluster = find_clusters(cpu_online_mask, clusters);
/*
* Of course the last CPU cannot be powered down, and cpu_down() should
* refuse to do so.
*/
pr_info("Trying to turn off and on again all CPUs\n");
err += down_and_up_cpus(cpu_online_mask, offlined_cpus);
/*
* Take down CPUs by cluster this time. When the last CPU is turned
* off, the cluster itself should shut down.
*/
for (i = 0; i < nb_cluster; ++i) {
int cluster_id =
topology_physical_package_id(cpumask_any(clusters[i]));
ssize_t len = cpumap_print_to_pagebuf(true, page_buf,
clusters[i]);
/* Remove trailing newline. */
page_buf[len - 1] = '\0';
pr_info("Trying to turn off and on again cluster %d "
"(CPUs %s)\n", cluster_id, page_buf);
err += down_and_up_cpus(clusters[i], offlined_cpus);
}
free_page((unsigned long)page_buf);
out_free_clusters:
kfree(clusters);
out_free_cpus:
free_cpumask_var(offlined_cpus);
return err;
}
static void dummy_callback(unsigned long ignored) {}
static int suspend_cpu(int index, bool broadcast)
{
int ret;
arch_cpu_idle_enter();
if (broadcast) {
/*
* The local timer will be shut down, so we need to enter tick
* broadcast mode.
*/
ret = tick_broadcast_enter();
if (ret) {
/*
* In the absence of hardware broadcast mechanism,
* this CPU might be used to broadcast wakeups, which
* may be why entering tick broadcast has failed.
* There is little the kernel can do to work around
* that, so enter WFI instead (idle state 0).
*/
cpu_do_idle();
ret = 0;
goto out_arch_exit;
}
}
/*
* Replicate the common ARM cpuidle enter function
* (arm_enter_idle_state).
*/
ret = CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, index);
if (broadcast)
tick_broadcast_exit();
out_arch_exit:
arch_cpu_idle_exit();
return ret;
}
static int suspend_test_thread(void *arg)
{
int cpu = (long)arg;
int i, nb_suspend = 0, nb_shallow_sleep = 0, nb_err = 0;
struct sched_param sched_priority = { .sched_priority = MAX_RT_PRIO-1 };
struct cpuidle_device *dev;
struct cpuidle_driver *drv;
/* No need for an actual callback, we just want to wake up the CPU. */
struct timer_list wakeup_timer =
TIMER_INITIALIZER(dummy_callback, 0, 0);
/* Wait for the main thread to give the start signal. */
wait_for_completion(&suspend_threads_started);
/* Set maximum priority to preempt all other threads on this CPU. */
if (sched_setscheduler_nocheck(current, SCHED_FIFO, &sched_priority))
pr_warn("Failed to set suspend thread scheduler on CPU %d\n",
cpu);
dev = this_cpu_read(cpuidle_devices);
drv = cpuidle_get_cpu_driver(dev);
pr_info("CPU %d entering suspend cycles, states 1 through %d\n",
cpu, drv->state_count - 1);
for (i = 0; i < NUM_SUSPEND_CYCLE; ++i) {
int index;
/*
* Test all possible states, except 0 (which is usually WFI and
* doesn't use PSCI).
*/
for (index = 1; index < drv->state_count; ++index) {
struct cpuidle_state *state = &drv->states[index];
bool broadcast = state->flags & CPUIDLE_FLAG_TIMER_STOP;
int ret;
/*
* Set the timer to wake this CPU up in some time (which
* should be largely sufficient for entering suspend).
* If the local tick is disabled when entering suspend,
* suspend_cpu() takes care of switching to a broadcast
* tick, so the timer will still wake us up.
*/
mod_timer(&wakeup_timer, jiffies +
usecs_to_jiffies(state->target_residency));
/* IRQs must be disabled during suspend operations. */
local_irq_disable();
ret = suspend_cpu(index, broadcast);
/*
* We have woken up. Re-enable IRQs to handle any
* pending interrupt, do not wait until the end of the
* loop.
*/
local_irq_enable();
if (ret == index) {
++nb_suspend;
} else if (ret >= 0) {
/* We did not enter the expected state. */
++nb_shallow_sleep;
} else {
pr_err("Failed to suspend CPU %d: error %d "
"(requested state %d, cycle %d)\n",
cpu, ret, index, i);
++nb_err;
}
}
}
/*
* Disable the timer to make sure that it does not trigger
* later.
*/
del_timer(&wakeup_timer);
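/* The last suspend thread to finish wakes up the main test thread. */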
if (atomic_dec_return_relaxed(&nb_active_threads) == 0)
complete(&suspend_threads_done);
/* Give up on RT scheduling and wait for termination. */
sched_priority.sched_priority = 0;
if (sched_setscheduler_nocheck(current, SCHED_NORMAL, &sched_priority))
pr_warn("Failed to set suspend thread scheduler on CPU %d\n",
cpu);
for (;;) {
/* Needs to be set first to avoid missing a wakeup. */
set_current_state(TASK_INTERRUPTIBLE);
if (kthread_should_stop()) {
__set_current_state(TASK_RUNNING);
break;
}
schedule();
}
pr_info("CPU %d suspend test results: success %d, shallow states %d, errors %d\n",
cpu, nb_suspend, nb_shallow_sleep, nb_err);
return nb_err;
}
static int suspend_tests(void)
{
int i, cpu, err = 0;
struct task_struct **threads;
int nb_threads = 0;
threads = kmalloc_array(nb_available_cpus, sizeof(*threads),
GFP_KERNEL);
if (!threads)
return -ENOMEM;
/*
* Stop cpuidle to prevent the idle tasks from entering a deep sleep
* mode, as it might interfere with the suspend threads on other CPUs.
* This does not prevent the suspend threads from using cpuidle (only
* the idle tasks check this status). Take the idle lock so that
* the cpuidle driver and device look-up can be carried out safely.
*/
cpuidle_pause_and_lock();
for_each_online_cpu(cpu) {
struct task_struct *thread;
/* Check that cpuidle is available on that CPU. */
struct cpuidle_device *dev = per_cpu(cpuidle_devices, cpu);
struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
if (!dev || !drv) {
pr_warn("cpuidle not available on CPU %d, ignoring\n",
cpu);
continue;
}
thread = kthread_create_on_cpu(suspend_test_thread,
(void *)(long)cpu, cpu,
"psci_suspend_test");
if (IS_ERR(thread))
pr_err("Failed to create kthread on CPU %d\n", cpu);
else
threads[nb_threads++] = thread;
}
if (nb_threads < 1) {
err = -ENODEV;
goto out;
}
atomic_set(&nb_active_threads, nb_threads);
/*
* Wake up the suspend threads. To avoid the main thread being preempted
* before all the threads have been unparked, the suspend threads will
* wait for the completion of suspend_threads_started.
*/
for (i = 0; i < nb_threads; ++i)
wake_up_process(threads[i]);
complete_all(&suspend_threads_started);
wait_for_completion(&suspend_threads_done);
/* Stop and destroy all threads, get return status. */
for (i = 0; i < nb_threads; ++i)
err += kthread_stop(threads[i]);
out:
cpuidle_resume_and_unlock();
kfree(threads);
return err;
}
static int __init psci_checker(void)
{
int ret;
/*
* Since we're in an initcall, we assume that all the CPUs that can
* be onlined have been onlined.
*
* The tests assume that hotplug is enabled but nobody else is using it,
* otherwise the results will be unpredictable. However, since there
* is no userspace yet in initcalls, that should be fine, as long as
* no torture test is running at the same time (see Kconfig).
*/
nb_available_cpus = num_online_cpus();
/* Check PSCI operations are set up and working. */
ret = psci_ops_check();
if (ret)
return ret;
pr_info("PSCI checker started using %u CPUs\n", nb_available_cpus);
pr_info("Starting hotplug tests\n");
ret = hotplug_tests();
if (ret == 0)
pr_info("Hotplug tests passed OK\n");
else if (ret > 0)
pr_err("%d error(s) encountered in hotplug tests\n", ret);
else {
pr_err("Out of memory\n");
return ret;
}
pr_info("Starting suspend tests (%d cycles per state)\n",
NUM_SUSPEND_CYCLE);
ret = suspend_tests();
if (ret == 0)
pr_info("Suspend tests passed OK\n");
else if (ret > 0)
pr_err("%d error(s) encountered in suspend tests\n", ret);
else {
switch (ret) {
case -ENOMEM:
pr_err("Out of memory\n");
break;
case -ENODEV:
pr_warn("Could not start suspend tests on any CPU\n");
break;
}
}
pr_info("PSCI checker completed\n");
return ret < 0 ? ret : 0;
}
late_initcall(psci_checker);
@@ -28,6 +28,10 @@
#include "qcom_scm.h"
#define SCM_HAS_CORE_CLK BIT(0)
#define SCM_HAS_IFACE_CLK BIT(1)
#define SCM_HAS_BUS_CLK BIT(2)
struct qcom_scm {
struct device *dev;
struct clk *core_clk;
@@ -323,32 +327,40 @@ EXPORT_SYMBOL(qcom_scm_is_available);
static int qcom_scm_probe(struct platform_device *pdev)
{
struct qcom_scm *scm;
unsigned long clks;
int ret;
scm = devm_kzalloc(&pdev->dev, sizeof(*scm), GFP_KERNEL);
if (!scm)
return -ENOMEM;
-scm->core_clk = devm_clk_get(&pdev->dev, "core");
-if (IS_ERR(scm->core_clk)) {
-if (PTR_ERR(scm->core_clk) == -EPROBE_DEFER)
+clks = (unsigned long)of_device_get_match_data(&pdev->dev);
+if (clks & SCM_HAS_CORE_CLK) {
+scm->core_clk = devm_clk_get(&pdev->dev, "core");
+if (IS_ERR(scm->core_clk)) {
+if (PTR_ERR(scm->core_clk) != -EPROBE_DEFER)
+dev_err(&pdev->dev,
+"failed to acquire core clk\n");
return PTR_ERR(scm->core_clk);
-scm->core_clk = NULL;
}
+}
-if (of_device_is_compatible(pdev->dev.of_node, "qcom,scm")) {
+if (clks & SCM_HAS_IFACE_CLK) {
scm->iface_clk = devm_clk_get(&pdev->dev, "iface");
if (IS_ERR(scm->iface_clk)) {
if (PTR_ERR(scm->iface_clk) != -EPROBE_DEFER)
dev_err(&pdev->dev, "failed to acquire iface clk\n");
dev_err(&pdev->dev,
"failed to acquire iface clk\n");
return PTR_ERR(scm->iface_clk);
}
}
if (clks & SCM_HAS_BUS_CLK) {
scm->bus_clk = devm_clk_get(&pdev->dev, "bus");
if (IS_ERR(scm->bus_clk)) {
if (PTR_ERR(scm->bus_clk) != -EPROBE_DEFER)
dev_err(&pdev->dev, "failed to acquire bus clk\n");
dev_err(&pdev->dev,
"failed to acquire bus clk\n");
return PTR_ERR(scm->bus_clk);
}
}
@@ -356,7 +368,9 @@ static int qcom_scm_probe(struct platform_device *pdev)
scm->reset.ops = &qcom_scm_pas_reset_ops;
scm->reset.nr_resets = 1;
scm->reset.of_node = pdev->dev.of_node;
-reset_controller_register(&scm->reset);
+ret = devm_reset_controller_register(&pdev->dev, &scm->reset);
+if (ret)
+return ret;
/* vote for max clk rate for highest performance */
ret = clk_set_rate(scm->core_clk, INT_MAX);
@@ -372,10 +386,23 @@ static int qcom_scm_probe(struct platform_device *pdev)
}
static const struct of_device_id qcom_scm_dt_match[] = {
{ .compatible = "qcom,scm-apq8064",},
{ .compatible = "qcom,scm-msm8660",},
{ .compatible = "qcom,scm-msm8960",},
{ .compatible = "qcom,scm",},
{ .compatible = "qcom,scm-apq8064",
.data = (void *) SCM_HAS_CORE_CLK,
},
{ .compatible = "qcom,scm-msm8660",
.data = (void *) SCM_HAS_CORE_CLK,
},
{ .compatible = "qcom,scm-msm8960",
.data = (void *) SCM_HAS_CORE_CLK,
},
{ .compatible = "qcom,scm-msm8996",
.data = NULL, /* no clocks */
},
{ .compatible = "qcom,scm",
.data = (void *)(SCM_HAS_CORE_CLK
| SCM_HAS_IFACE_CLK
| SCM_HAS_BUS_CLK),
},
{}
};
menu "Tegra firmware driver"
config TEGRA_IVC
bool "Tegra IVC protocol"
depends on ARCH_TEGRA
help
IVC (Inter-VM Communication) protocol is part of the IPC
(Inter Processor Communication) framework on Tegra. It maintains the
data and the different communication channels in SysRAM or RAM and
keeps the content synchronized between the host CPU and remote
processors.
config TEGRA_BPMP
bool "Tegra BPMP driver"
depends on ARCH_TEGRA && TEGRA_HSP_MBOX && TEGRA_IVC
help
BPMP (Boot and Power Management Processor) is designed to off-load
PM functions, including clock/DVFS/thermal/power, from the CPU.
It uses HSP as the HW synchronization and notification module and
the IVC module as the message communication protocol.
This driver manages the IPC interface between the host CPU and the
firmware running on the BPMP.
endmenu
obj-$(CONFIG_TEGRA_BPMP) += bpmp.o
obj-$(CONFIG_TEGRA_IVC) += ivc.o
(This file's diff has been collapsed.)
/*
* Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#include <soc/tegra/ivc.h>
#define TEGRA_IVC_ALIGN 64
/*
* IVC channel reset protocol.
*
* Each end uses its tx_channel.state to indicate its synchronization state.
*/
enum tegra_ivc_state {
/*
* This value is zero for backwards compatibility with services that
* assume channels to be initially zeroed. Such channels are in an
* initially valid state, but cannot be asynchronously reset, and must
* maintain a valid state at all times.
*
* The transmitting end can enter the established state from the sync or
* ack state when it observes the receiving endpoint in the ack or
* established state, indicating that it has cleared the counters in our
* rx_channel.
*/
TEGRA_IVC_STATE_ESTABLISHED = 0,
/*
* If an endpoint is observed in the sync state, the remote endpoint is
* allowed to clear the counters it owns asynchronously with respect to
* the current endpoint. Therefore, the current endpoint is no longer
* allowed to communicate.
*/
TEGRA_IVC_STATE_SYNC,
/*
* When the transmitting end observes the receiving end in the sync
* state, it can clear the w_count and r_count and transition to the ack
* state. If the remote endpoint observes us in the ack state, it can
* return to the established state once it has cleared its counters.
*/
TEGRA_IVC_STATE_ACK
};
/*
* This structure is divided into two-cache aligned parts, the first is only
* written through the tx.channel pointer, while the second is only written
* through the rx.channel pointer. This delineates ownership of the cache
* lines, which is critical to performance and necessary in non-cache coherent
* implementations.
*/
struct tegra_ivc_header {
union {
struct {
/* fields owned by the transmitting end */
u32 count;
u32 state;
};
u8 pad[TEGRA_IVC_ALIGN];
} tx;
union {
/* fields owned by the receiving end */
u32 count;
u8 pad[TEGRA_IVC_ALIGN];
} rx;
};
static inline void tegra_ivc_invalidate(struct tegra_ivc *ivc, dma_addr_t phys)
{
if (!ivc->peer)
return;
dma_sync_single_for_cpu(ivc->peer, phys, TEGRA_IVC_ALIGN,
DMA_FROM_DEVICE);
}
static inline void tegra_ivc_flush(struct tegra_ivc *ivc, dma_addr_t phys)
{
if (!ivc->peer)
return;
dma_sync_single_for_device(ivc->peer, phys, TEGRA_IVC_ALIGN,
DMA_TO_DEVICE);
}
static inline bool tegra_ivc_empty(struct tegra_ivc *ivc,
struct tegra_ivc_header *header)
{
/*
* This function performs multiple checks on the same values with
* security implications, so create snapshots with ACCESS_ONCE() to
* ensure that these checks use the same values.
*/
u32 tx = ACCESS_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count);
/*
* Perform an over-full check to prevent denial of service attacks
* where a server could be easily fooled into believing that there's
* an extremely large number of frames ready, since receivers are not
* expected to check for full or over-full conditions.
*
* Although the channel isn't empty, this is an invalid case caused by
* a potentially malicious peer, so returning empty is safer, because
* it gives the impression that the channel has gone silent.
*/
if (tx - rx > ivc->num_frames)
return true;
return tx == rx;
}
static inline bool tegra_ivc_full(struct tegra_ivc *ivc,
struct tegra_ivc_header *header)
{
u32 tx = ACCESS_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count);
/*
* Invalid cases where the counters indicate that the queue is over
* capacity also appear full.
*/
return tx - rx >= ivc->num_frames;
}
static inline u32 tegra_ivc_available(struct tegra_ivc *ivc,
struct tegra_ivc_header *header)
{
u32 tx = ACCESS_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count);
/*
* This function isn't expected to be used in scenarios where an
* over-full situation can lead to denial of service attacks. See the
* comment in tegra_ivc_empty() for an explanation about special
* over-full considerations.
*/
return tx - rx;
}
static inline void tegra_ivc_advance_tx(struct tegra_ivc *ivc)
{
ACCESS_ONCE(ivc->tx.channel->tx.count) =
ACCESS_ONCE(ivc->tx.channel->tx.count) + 1;
if (ivc->tx.position == ivc->num_frames - 1)
ivc->tx.position = 0;
else
ivc->tx.position++;
}
static inline void tegra_ivc_advance_rx(struct tegra_ivc *ivc)
{
ACCESS_ONCE(ivc->rx.channel->rx.count) =
ACCESS_ONCE(ivc->rx.channel->rx.count) + 1;
if (ivc->rx.position == ivc->num_frames - 1)
ivc->rx.position = 0;
else
ivc->rx.position++;
}
static inline int tegra_ivc_check_read(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* tx.channel->state is set locally, so it is not synchronized with
* state from the remote peer. The remote peer cannot reset its
* transmit counters until we've acknowledged its synchronization
* request, so no additional synchronization is required because an
* asynchronous transition of rx.channel->state to
* TEGRA_IVC_STATE_ACK is not allowed.
*/
if (ivc->tx.channel->tx.state != TEGRA_IVC_STATE_ESTABLISHED)
return -ECONNRESET;
/*
* Avoid unnecessary invalidations when performing repeated accesses
* to an IVC channel by checking the old queue pointers first.
*
* Synchronization is only necessary when these pointers indicate
* empty or full.
*/
if (!tegra_ivc_empty(ivc, ivc->rx.channel))
return 0;
tegra_ivc_invalidate(ivc, ivc->rx.phys + offset);
if (tegra_ivc_empty(ivc, ivc->rx.channel))
return -ENOSPC;
return 0;
}
static inline int tegra_ivc_check_write(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, rx.count);
if (ivc->tx.channel->tx.state != TEGRA_IVC_STATE_ESTABLISHED)
return -ECONNRESET;
if (!tegra_ivc_full(ivc, ivc->tx.channel))
return 0;
tegra_ivc_invalidate(ivc, ivc->tx.phys + offset);
if (tegra_ivc_full(ivc, ivc->tx.channel))
return -ENOSPC;
return 0;
}
static void *tegra_ivc_frame_virt(struct tegra_ivc *ivc,
struct tegra_ivc_header *header,
unsigned int frame)
{
if (WARN_ON(frame >= ivc->num_frames))
return ERR_PTR(-EINVAL);
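/* Frames are laid out contiguously right after the channel header. */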
return (void *)(header + 1) + ivc->frame_size * frame;
}
static inline dma_addr_t tegra_ivc_frame_phys(struct tegra_ivc *ivc,
dma_addr_t phys,
unsigned int frame)
{
unsigned long offset;
offset = sizeof(struct tegra_ivc_header) + ivc->frame_size * frame;
return phys + offset;
}
static inline void tegra_ivc_invalidate_frame(struct tegra_ivc *ivc,
dma_addr_t phys,
unsigned int frame,
unsigned int offset,
size_t size)
{
if (!ivc->peer || WARN_ON(frame >= ivc->num_frames))
return;
phys = tegra_ivc_frame_phys(ivc, phys, frame) + offset;
dma_sync_single_for_cpu(ivc->peer, phys, size, DMA_FROM_DEVICE);
}
static inline void tegra_ivc_flush_frame(struct tegra_ivc *ivc,
dma_addr_t phys,
unsigned int frame,
unsigned int offset,
size_t size)
{
if (!ivc->peer || WARN_ON(frame >= ivc->num_frames))
return;
phys = tegra_ivc_frame_phys(ivc, phys, frame) + offset;
dma_sync_single_for_device(ivc->peer, phys, size, DMA_TO_DEVICE);
}
/* directly peek at the next frame rx'ed */
void *tegra_ivc_read_get_next_frame(struct tegra_ivc *ivc)
{
int err;
if (WARN_ON(ivc == NULL))
return ERR_PTR(-EINVAL);
err = tegra_ivc_check_read(ivc);
if (err < 0)
return ERR_PTR(err);
/*
* Order observation of ivc->rx.position potentially indicating new
* data before data read.
*/
smp_rmb();
tegra_ivc_invalidate_frame(ivc, ivc->rx.phys, ivc->rx.position, 0,
ivc->frame_size);
return tegra_ivc_frame_virt(ivc, ivc->rx.channel, ivc->rx.position);
}
EXPORT_SYMBOL(tegra_ivc_read_get_next_frame);
int tegra_ivc_read_advance(struct tegra_ivc *ivc)
{
unsigned int rx = offsetof(struct tegra_ivc_header, rx.count);
unsigned int tx = offsetof(struct tegra_ivc_header, tx.count);
int err;
/*
* No read barriers or synchronization here: the caller is expected to
* have already observed the channel non-empty. This check is just to
* catch programming errors.
*/
err = tegra_ivc_check_read(ivc);
if (err < 0)
return err;
tegra_ivc_advance_rx(ivc);
tegra_ivc_flush(ivc, ivc->rx.phys + rx);
/*
* Ensure our write to ivc->rx.position occurs before our read from
* ivc->tx.position.
*/
smp_mb();
/*
* Notify only upon transition from full to non-full. The available
* count can only asynchronously increase, so the worst possible
* side-effect will be a spurious notification.
*/
tegra_ivc_invalidate(ivc, ivc->rx.phys + tx);
if (tegra_ivc_available(ivc, ivc->rx.channel) == ivc->num_frames - 1)
ivc->notify(ivc, ivc->notify_data);
return 0;
}
EXPORT_SYMBOL(tegra_ivc_read_advance);
/* directly poke at the next frame to be tx'ed */
void *tegra_ivc_write_get_next_frame(struct tegra_ivc *ivc)
{
int err;
err = tegra_ivc_check_write(ivc);
if (err < 0)
return ERR_PTR(err);
return tegra_ivc_frame_virt(ivc, ivc->tx.channel, ivc->tx.position);
}
EXPORT_SYMBOL(tegra_ivc_write_get_next_frame);
/* advance the tx buffer */
int tegra_ivc_write_advance(struct tegra_ivc *ivc)
{
unsigned int tx = offsetof(struct tegra_ivc_header, tx.count);
unsigned int rx = offsetof(struct tegra_ivc_header, rx.count);
int err;
err = tegra_ivc_check_write(ivc);
if (err < 0)
return err;
tegra_ivc_flush_frame(ivc, ivc->tx.phys, ivc->tx.position, 0,
ivc->frame_size);
/*
* Order any possible stores to the frame before update of
* ivc->tx.position.
*/
smp_wmb();
tegra_ivc_advance_tx(ivc);
tegra_ivc_flush(ivc, ivc->tx.phys + tx);
/*
* Ensure our write to ivc->tx.position occurs before our read from
* ivc->rx.position.
*/
smp_mb();
/*
* Notify only upon transition from empty to non-empty. The available
* count can only asynchronously decrease, so the worst possible
* side-effect will be a spurious notification.
*/
tegra_ivc_invalidate(ivc, ivc->tx.phys + rx);
if (tegra_ivc_available(ivc, ivc->tx.channel) == 1)
ivc->notify(ivc, ivc->notify_data);
return 0;
}
EXPORT_SYMBOL(tegra_ivc_write_advance);
void tegra_ivc_reset(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, tx.count);
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_SYNC;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
ivc->notify(ivc, ivc->notify_data);
}
EXPORT_SYMBOL(tegra_ivc_reset);
/*
* =======================================================
* IVC State Transition Table - see tegra_ivc_notified()
* =======================================================
*
* local remote action
* ----- ------ -----------------------------------
* SYNC EST <none>
* SYNC ACK reset counters; move to EST; notify
* SYNC SYNC reset counters; move to ACK; notify
* ACK EST move to EST; notify
* ACK ACK move to EST; notify
* ACK SYNC reset counters; move to ACK; notify
* EST EST <none>
* EST ACK <none>
* EST SYNC reset counters; move to ACK; notify
*
* ===============================================================
*/
int tegra_ivc_notified(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, tx.count);
enum tegra_ivc_state state;
/* Copy the receiver's state out of shared memory. */
tegra_ivc_invalidate(ivc, ivc->rx.phys + offset);
state = ACCESS_ONCE(ivc->rx.channel->tx.state);
if (state == TEGRA_IVC_STATE_SYNC) {
offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* Order observation of TEGRA_IVC_STATE_SYNC before stores
* clearing tx.channel.
*/
smp_rmb();
/*
* Reset tx.channel counters. The remote end is in the SYNC
* state and won't make progress until we change our state,
* so the counters are not in use at this time.
*/
ivc->tx.channel->tx.count = 0;
ivc->rx.channel->rx.count = 0;
ivc->tx.position = 0;
ivc->rx.position = 0;
/*
* Ensure that counters appear cleared before new state can be
* observed.
*/
smp_wmb();
/*
* Move to ACK state. We have just cleared our counters, so it
* is now safe for the remote end to start using these values.
*/
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_ACK;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
/*
* Notify remote end to observe state transition.
*/
ivc->notify(ivc, ivc->notify_data);
} else if (ivc->tx.channel->tx.state == TEGRA_IVC_STATE_SYNC &&
state == TEGRA_IVC_STATE_ACK) {
offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* Order observation of ivc_state_sync before stores clearing
* tx_channel.
*/
smp_rmb();
/*
* Reset tx.channel counters. The remote end is in the ACK
* state and won't make progress until we change our state,
* so the counters are not in use at this time.
*/
ivc->tx.channel->tx.count = 0;
ivc->rx.channel->rx.count = 0;
ivc->tx.position = 0;
ivc->rx.position = 0;
/*
* Ensure that counters appear cleared before new state can be
* observed.
*/
smp_wmb();
/*
* Move to ESTABLISHED state. We know that the remote end has
* already cleared its counters, so it is safe to start
* writing/reading on this channel.
*/
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_ESTABLISHED;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
/*
* Notify remote end to observe state transition.
*/
ivc->notify(ivc, ivc->notify_data);
} else if (ivc->tx.channel->tx.state == TEGRA_IVC_STATE_ACK) {
offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* At this point, we have observed the peer to be in either
* the ACK or ESTABLISHED state. Next, order observation of
* peer state before storing to tx.channel.
*/
smp_rmb();
/*
* Move to ESTABLISHED state. We know that we have previously
* cleared our counters, and we know that the remote end has
* cleared its counters, so it is safe to start writing/reading
* on this channel.
*/
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_ESTABLISHED;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
/*
* Notify remote end to observe state transition.
*/
ivc->notify(ivc, ivc->notify_data);
} else {
/*
* There is no need to handle any further action. Either the
* channel is already fully established, or we are waiting for
* the remote end to catch up with our current state. Refer
* to the diagram in "IVC State Transition Table" above.
*/
}
if (ivc->tx.channel->tx.state != TEGRA_IVC_STATE_ESTABLISHED)
return -EAGAIN;
return 0;
}
EXPORT_SYMBOL(tegra_ivc_notified);
size_t tegra_ivc_align(size_t size)
{
return ALIGN(size, TEGRA_IVC_ALIGN);
}
EXPORT_SYMBOL(tegra_ivc_align);
unsigned tegra_ivc_total_queue_size(unsigned queue_size)
{
if (!IS_ALIGNED(queue_size, TEGRA_IVC_ALIGN)) {
pr_err("%s: queue_size (%u) must be %u-byte aligned\n",
__func__, queue_size, TEGRA_IVC_ALIGN);
return 0;
}
return queue_size + sizeof(struct tegra_ivc_header);
}
EXPORT_SYMBOL(tegra_ivc_total_queue_size);
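As a minimal sizing sketch, a client combines the two helpers above to compute how much memory one direction of a channel needs. The frame count and payload size below are made-up illustration values, not taken from this patch:
/* Hypothetical example: 4 frames of (at least) 100 payload bytes each. */
size_t frame_size = tegra_ivc_align(100);	/* rounds up to 128 */
size_t queue_size = tegra_ivc_total_queue_size(4 * frame_size);
/* queue_size == 4 * 128 + sizeof(struct tegra_ivc_header) */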
static int tegra_ivc_check_params(unsigned long rx, unsigned long tx,
unsigned int num_frames, size_t frame_size)
{
BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct tegra_ivc_header, tx.count),
TEGRA_IVC_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct tegra_ivc_header, rx.count),
TEGRA_IVC_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(sizeof(struct tegra_ivc_header),
TEGRA_IVC_ALIGN));
if ((uint64_t)num_frames * (uint64_t)frame_size >= 0x100000000UL) {
pr_err("num_frames * frame_size overflows\n");
return -EINVAL;
}
if (!IS_ALIGNED(frame_size, TEGRA_IVC_ALIGN)) {
pr_err("frame size not adequately aligned: %zu\n", frame_size);
return -EINVAL;
}
/*
* The headers must at least be aligned enough for counters
* to be accessed atomically.
*/
if (!IS_ALIGNED(rx, TEGRA_IVC_ALIGN)) {
pr_err("IVC channel start not aligned: %#lx\n", rx);
return -EINVAL;
}
if (!IS_ALIGNED(tx, TEGRA_IVC_ALIGN)) {
pr_err("IVC channel start not aligned: %#lx\n", tx);
return -EINVAL;
}
if (rx < tx) {
if (rx + frame_size * num_frames > tx) {
pr_err("queue regions overlap: %#lx + %zx > %#lx\n",
rx, frame_size * num_frames, tx);
return -EINVAL;
}
} else {
if (tx + frame_size * num_frames > rx) {
pr_err("queue regions overlap: %#lx + %zx > %#lx\n",
tx, frame_size * num_frames, rx);
return -EINVAL;
}
}
return 0;
}
int tegra_ivc_init(struct tegra_ivc *ivc, struct device *peer, void *rx,
dma_addr_t rx_phys, void *tx, dma_addr_t tx_phys,
unsigned int num_frames, size_t frame_size,
void (*notify)(struct tegra_ivc *ivc, void *data),
void *data)
{
size_t queue_size;
int err;
if (WARN_ON(!ivc || !notify))
return -EINVAL;
/*
* All sizes that can be returned by communication functions should
* fit in an int.
*/
if (frame_size > INT_MAX)
return -E2BIG;
err = tegra_ivc_check_params((unsigned long)rx, (unsigned long)tx,
num_frames, frame_size);
if (err < 0)
return err;
queue_size = tegra_ivc_total_queue_size(num_frames * frame_size);
if (peer) {
ivc->rx.phys = dma_map_single(peer, rx, queue_size,
DMA_BIDIRECTIONAL);
if (ivc->rx.phys == DMA_ERROR_CODE)
return -ENOMEM;
ivc->tx.phys = dma_map_single(peer, tx, queue_size,
DMA_BIDIRECTIONAL);
if (ivc->tx.phys == DMA_ERROR_CODE) {
dma_unmap_single(peer, ivc->rx.phys, queue_size,
DMA_BIDIRECTIONAL);
return -ENOMEM;
}
} else {
ivc->rx.phys = rx_phys;
ivc->tx.phys = tx_phys;
}
ivc->rx.channel = rx;
ivc->tx.channel = tx;
ivc->peer = peer;
ivc->notify = notify;
ivc->notify_data = data;
ivc->frame_size = frame_size;
ivc->num_frames = num_frames;
/*
* These values aren't necessarily correct until the channel has been
* reset.
*/
ivc->tx.position = 0;
ivc->rx.position = 0;
return 0;
}
EXPORT_SYMBOL(tegra_ivc_init);
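A hedged caller sketch, assuming the rx/tx areas come from elsewhere and using hypothetical names throughout (the BPMP driver added in this series is the real in-tree user): initialize the channel, kick off the reset handshake, then drive tegra_ivc_notified() from the notification path until it stops returning -EAGAIN.
static void my_notify(struct tegra_ivc *ivc, void *data)
{
	/* e.g. ring a mailbox doorbell so the remote end re-reads our state */
}

static int my_channel_setup(void *rx, dma_addr_t rx_phys, void *tx,
			    dma_addr_t tx_phys, size_t frame_size)
{
	static struct tegra_ivc ivc;	/* must outlive this function */
	int err;

	/* frame_size must be TEGRA_IVC_ALIGN-aligned, see the checks above. */
	err = tegra_ivc_init(&ivc, NULL, rx, rx_phys, tx, tx_phys,
			     4, frame_size, my_notify, NULL);
	if (err < 0)
		return err;

	/* Enter the SYNC state and notify the remote end. */
	tegra_ivc_reset(&ivc);

	/*
	 * On each subsequent notification from the remote end, call
	 * tegra_ivc_notified(); it returns -EAGAIN until ESTABLISHED,
	 * after which tegra_ivc_write_get_next_frame() etc. may be used.
	 */
	return 0;
}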
void tegra_ivc_cleanup(struct tegra_ivc *ivc)
{
if (ivc->peer) {
size_t size = tegra_ivc_total_queue_size(ivc->num_frames *
ivc->frame_size);
dma_unmap_single(ivc->peer, ivc->rx.phys, size,
DMA_BIDIRECTIONAL);
dma_unmap_single(ivc->peer, ivc->tx.phys, size,
DMA_BIDIRECTIONAL);
}
}
EXPORT_SYMBOL(tegra_ivc_cleanup);
(This file's diff has been collapsed.)
/*
* Texas Instruments System Control Interface (TISCI) Protocol
*
* Communication protocol with TI SCI hardware
* The system works in a message response protocol
* See: http://processors.wiki.ti.com/index.php/TISCI for details
*
* Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the
* distribution.
*
* Neither the name of Texas Instruments Incorporated nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#ifndef __TI_SCI_H
#define __TI_SCI_H
/* Generic Messages */
#define TI_SCI_MSG_ENABLE_WDT 0x0000
#define TI_SCI_MSG_WAKE_RESET 0x0001
#define TI_SCI_MSG_VERSION 0x0002
#define TI_SCI_MSG_WAKE_REASON 0x0003
#define TI_SCI_MSG_GOODBYE 0x0004
#define TI_SCI_MSG_SYS_RESET 0x0005
/* Device requests */
#define TI_SCI_MSG_SET_DEVICE_STATE 0x0200
#define TI_SCI_MSG_GET_DEVICE_STATE 0x0201
#define TI_SCI_MSG_SET_DEVICE_RESETS 0x0202
/* Clock requests */
#define TI_SCI_MSG_SET_CLOCK_STATE 0x0100
#define TI_SCI_MSG_GET_CLOCK_STATE 0x0101
#define TI_SCI_MSG_SET_CLOCK_PARENT 0x0102
#define TI_SCI_MSG_GET_CLOCK_PARENT 0x0103
#define TI_SCI_MSG_GET_NUM_CLOCK_PARENTS 0x0104
#define TI_SCI_MSG_SET_CLOCK_FREQ 0x010c
#define TI_SCI_MSG_QUERY_CLOCK_FREQ 0x010d
#define TI_SCI_MSG_GET_CLOCK_FREQ 0x010e
/**
* struct ti_sci_msg_hdr - Generic message header for all messages and responses
* @type: Type of messages: One of TI_SCI_MSG* values
* @host: Host of the message
* @seq: Message identifier indicating a transfer sequence
* @flags: Flag for the message
*/
struct ti_sci_msg_hdr {
u16 type;
u8 host;
u8 seq;
#define TI_SCI_MSG_FLAG(val) (1 << (val))
#define TI_SCI_FLAG_REQ_GENERIC_NORESPONSE 0x0
#define TI_SCI_FLAG_REQ_ACK_ON_RECEIVED TI_SCI_MSG_FLAG(0)
#define TI_SCI_FLAG_REQ_ACK_ON_PROCESSED TI_SCI_MSG_FLAG(1)
#define TI_SCI_FLAG_RESP_GENERIC_NACK 0x0
#define TI_SCI_FLAG_RESP_GENERIC_ACK TI_SCI_MSG_FLAG(1)
/* Additional Flags */
u32 flags;
} __packed;
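As a quick illustration (not part of this header), a version query is just this header carrying the TI_SCI_MSG_VERSION type and an ACK request; the host and sequence values below are placeholders, since host IDs are assigned per SoC:
struct ti_sci_msg_hdr hdr = {
	.type = TI_SCI_MSG_VERSION,
	.host = 0,	/* placeholder host ID */
	.seq = 1,	/* caller-chosen transfer sequence number */
	.flags = TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,	/* ask for ACK/NACK */
};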
/**
* struct ti_sci_msg_resp_version - Response for a message
* @hdr: Generic header
* @firmware_description: String describing the firmware
* @firmware_revision: Firmware revision
* @abi_major: Major version of the ABI that firmware supports
* @abi_minor: Minor version of the ABI that firmware supports
*
* In general, ABI version changes follow the rule that minor version increments
* are backward compatible. Major revision changes in ABI may not be
* backward compatible.
*
* Response to a generic message with message type TI_SCI_MSG_VERSION
*/
struct ti_sci_msg_resp_version {
struct ti_sci_msg_hdr hdr;
char firmware_description[32];
u16 firmware_revision;
u8 abi_major;
u8 abi_minor;
} __packed;
/**
* struct ti_sci_msg_req_reboot - Reboot the SoC
* @hdr: Generic Header
*
* Request type is TI_SCI_MSG_SYS_RESET, responded with a generic
* ACK/NACK message.
*/
struct ti_sci_msg_req_reboot {
struct ti_sci_msg_hdr hdr;
} __packed;
/**
* struct ti_sci_msg_req_set_device_state - Set the desired state of the device
* @hdr: Generic header
* @id: Indicates which device to modify
* @reserved: Reserved space in message, must be 0 for backward compatibility
* @state: The desired state of the device.
*
* Certain flags can also be set to alter the device state:
* + MSG_FLAG_DEVICE_WAKE_ENABLED - Configure the device to be a wake source.
* The meaning of this flag will vary slightly from device to device and from
* SoC to SoC but it generally allows the device to wake the SoC out of deep
* suspend states.
* + MSG_FLAG_DEVICE_RESET_ISO - Enable reset isolation for this device.
* + MSG_FLAG_DEVICE_EXCLUSIVE - Claim this device exclusively. When passed
* with STATE_RETENTION or STATE_ON, it will claim the device exclusively.
* If another host already has this device set to STATE_RETENTION or STATE_ON,
* the message will fail. Once successful, other hosts attempting to set
* STATE_RETENTION or STATE_ON will fail.
*
* Request type is TI_SCI_MSG_SET_DEVICE_STATE, responded with a generic
* ACK/NACK message.
*/
struct ti_sci_msg_req_set_device_state {
/* Additional hdr->flags options */
#define MSG_FLAG_DEVICE_WAKE_ENABLED TI_SCI_MSG_FLAG(8)
#define MSG_FLAG_DEVICE_RESET_ISO TI_SCI_MSG_FLAG(9)
#define MSG_FLAG_DEVICE_EXCLUSIVE TI_SCI_MSG_FLAG(10)
struct ti_sci_msg_hdr hdr;
u32 id;
u32 reserved;
#define MSG_DEVICE_SW_STATE_AUTO_OFF 0
#define MSG_DEVICE_SW_STATE_RETENTION 1
#define MSG_DEVICE_SW_STATE_ON 2
u8 state;
} __packed;
/**
* struct ti_sci_msg_req_get_device_state - Request to get device state.
* @hdr: Generic header
* @id: Device Identifier
*
* Request type is TI_SCI_MSG_GET_DEVICE_STATE, responded to with device
* state information
*/
struct ti_sci_msg_req_get_device_state {
struct ti_sci_msg_hdr hdr;
u32 id;
} __packed;
/**
* struct ti_sci_msg_resp_get_device_state - Response to get device request.
* @hdr: Generic header
* @context_loss_count: Indicates how many times the device has lost context. A
* driver can use this monotonic counter to determine if the device has
* lost context since the last time this message was exchanged.
* @resets: Programmed state of the reset lines.
* @programmed_state: The state as programmed by set_device.
* - Uses the MSG_DEVICE_SW_* macros
* @current_state: The actual state of the hardware.
*
* Response to request TI_SCI_MSG_GET_DEVICE_STATE.
*/
struct ti_sci_msg_resp_get_device_state {
struct ti_sci_msg_hdr hdr;
u32 context_loss_count;
u32 resets;
u8 programmed_state;
#define MSG_DEVICE_HW_STATE_OFF 0
#define MSG_DEVICE_HW_STATE_ON 1
#define MSG_DEVICE_HW_STATE_TRANS 2
u8 current_state;
} __packed;
/**
* struct ti_sci_msg_req_set_device_resets - Set the desired resets
* configuration of the device
* @hdr: Generic header
* @id: Indicates which device to modify
* @resets: A bit field of resets for the device. The meaning, behavior,
* and usage of the reset flags are device specific. 0 for a bit
* indicates releasing the reset represented by that bit while 1
* indicates keeping it held.
*
* Request type is TI_SCI_MSG_SET_DEVICE_RESETS, responded with a generic
* ACK/NACK message.
*/
struct ti_sci_msg_req_set_device_resets {
struct ti_sci_msg_hdr hdr;
u32 id;
u32 resets;
} __packed;
/**
* struct ti_sci_msg_req_set_clock_state - Request to setup a Clock state
* @hdr: Generic Header, Certain flags can be set specific to the clocks:
* MSG_FLAG_CLOCK_ALLOW_SSC: Allow this clock to be modified
* via spread spectrum clocking.
* MSG_FLAG_CLOCK_ALLOW_FREQ_CHANGE: Allow this clock's
* frequency to be changed while it is running so long as it
* is within the min/max limits.
* MSG_FLAG_CLOCK_INPUT_TERM: Enable input termination, this
* is only applicable to clock inputs on the SoC pseudo-device.
* @dev_id: Device identifier this request is for
* @clk_id: Clock identifier for the device for this request.
* Each device has its own set of clock inputs. This indexes
* which clock input to modify.
* @request_state: Request the state for the clock to be set to.
* MSG_CLOCK_SW_STATE_UNREQ: The IP does not require this clock,
* it can be disabled, regardless of the state of the device
* MSG_CLOCK_SW_STATE_AUTO: Allow the System Controller to
* automatically manage the state of this clock. If the device
* is enabled, then the clock is enabled. If the device is set
* to off or retention, then the clock is internally set as not
* being required by the device.(default)
* MSG_CLOCK_SW_STATE_REQ: Configure the clock to be enabled,
* regardless of the state of the device.
*
* Normally, all required clocks are managed by the TISCI entity; this is
* used only for specific control *if* required. The auto-managed state is
* MSG_CLOCK_SW_STATE_AUTO; in the other states, the TISCI entity assumes
* the remote host will control the clock explicitly.
*
* Request type is TI_SCI_MSG_SET_CLOCK_STATE, response is a generic
* ACK or NACK message.
*/
struct ti_sci_msg_req_set_clock_state {
/* Additional hdr->flags options */
#define MSG_FLAG_CLOCK_ALLOW_SSC TI_SCI_MSG_FLAG(8)
#define MSG_FLAG_CLOCK_ALLOW_FREQ_CHANGE TI_SCI_MSG_FLAG(9)
#define MSG_FLAG_CLOCK_INPUT_TERM TI_SCI_MSG_FLAG(10)
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u8 clk_id;
#define MSG_CLOCK_SW_STATE_UNREQ 0
#define MSG_CLOCK_SW_STATE_AUTO 1
#define MSG_CLOCK_SW_STATE_REQ 2
u8 request_state;
} __packed;
/**
* struct ti_sci_msg_req_get_clock_state - Request for clock state
* @hdr: Generic Header
* @dev_id: Device identifier this request is for
* @clk_id: Clock identifier for the device for this request.
* Each device has its own set of clock inputs. This indexes
* which clock input to get state of.
*
* Request type is TI_SCI_MSG_GET_CLOCK_STATE, response is state
* of the clock
*/
struct ti_sci_msg_req_get_clock_state {
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u8 clk_id;
} __packed;
/**
* struct ti_sci_msg_resp_get_clock_state - Response to get clock state
* @hdr: Generic Header
* @programmed_state: Any programmed state of the clock. This is one of
* MSG_CLOCK_SW_STATE* values.
* @current_state: Current state of the clock. This is one of:
* MSG_CLOCK_HW_STATE_NOT_READY: Clock is not ready
* MSG_CLOCK_HW_STATE_READY: Clock is ready
*
* Response to TI_SCI_MSG_GET_CLOCK_STATE.
*/
struct ti_sci_msg_resp_get_clock_state {
struct ti_sci_msg_hdr hdr;
u8 programmed_state;
#define MSG_CLOCK_HW_STATE_NOT_READY 0
#define MSG_CLOCK_HW_STATE_READY 1
u8 current_state;
} __packed;
/**
* struct ti_sci_msg_req_set_clock_parent - Set the clock parent
* @hdr: Generic Header
* @dev_id: Device identifier this request is for
* @clk_id: Clock identifier for the device for this request.
* Each device has its own set of clock inputs. This indexes
* which clock input to modify.
* @parent_id: The new clock parent is selectable by an index via this
* parameter.
*
* Request type is TI_SCI_MSG_SET_CLOCK_PARENT, response is generic
* ACK / NACK message.
*/
struct ti_sci_msg_req_set_clock_parent {
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u8 clk_id;
u8 parent_id;
} __packed;
/**
* struct ti_sci_msg_req_get_clock_parent - Get the clock parent
* @hdr: Generic Header
* @dev_id: Device identifier this request is for
* @clk_id: Clock identifier for the device for this request.
* Each device has its own set of clock inputs. This indexes
* which clock input to get the parent for.
*
* Request type is TI_SCI_MSG_GET_CLOCK_PARENT, response is parent information
*/
struct ti_sci_msg_req_get_clock_parent {
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u8 clk_id;
} __packed;
/**
* struct ti_sci_msg_resp_get_clock_parent - Response with clock parent
* @hdr: Generic Header
* @parent_id: The current clock parent
*
* Response to TI_SCI_MSG_GET_CLOCK_PARENT.
*/
struct ti_sci_msg_resp_get_clock_parent {
struct ti_sci_msg_hdr hdr;
u8 parent_id;
} __packed;
/**
* struct ti_sci_msg_req_get_clock_num_parents - Request to get clock parents
* @hdr: Generic header
* @dev_id: Device identifier this request is for
* @clk_id: Clock identifier for the device for this request.
*
* This request provides information about how many clock parent options
* are available for a given clock to a device. This is typically used
* for input clocks.
*
* Request type is TI_SCI_MSG_GET_NUM_CLOCK_PARENTS, response is appropriate
* message, or NACK in case of inability to satisfy request.
*/
struct ti_sci_msg_req_get_clock_num_parents {
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u8 clk_id;
} __packed;
/**
* struct ti_sci_msg_resp_get_clock_num_parents - Response for get clk parents
* @hdr: Generic header
* @num_parents: Number of clock parents
*
* Response to TI_SCI_MSG_GET_NUM_CLOCK_PARENTS
*/
struct ti_sci_msg_resp_get_clock_num_parents {
struct ti_sci_msg_hdr hdr;
u8 num_parents;
} __packed;
/**
* struct ti_sci_msg_req_query_clock_freq - Request to query a frequency
* @hdr: Generic Header
* @dev_id: Device identifier this request is for
* @min_freq_hz: The minimum allowable frequency in Hz. This is the minimum
* allowable programmed frequency and does not account for clock
* tolerances and jitter.
* @target_freq_hz: The target clock frequency. A frequency will be found
* as close to this target frequency as possible.
* @max_freq_hz: The maximum allowable frequency in Hz. This is the maximum
* allowable programmed frequency and does not account for clock
* tolerances and jitter.
* @clk_id: Clock identifier for the device for this request.
*
* NOTE: Normally clock frequency management is automatically done by TISCI
* entity. In case of specific requests, TISCI evaluates capability to achieve
* requested frequency within provided range and responds with
* result message.
*
* Request type is TI_SCI_MSG_QUERY_CLOCK_FREQ, response is appropriate message,
* or NACK in case of inability to satisfy request.
*/
struct ti_sci_msg_req_query_clock_freq {
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u64 min_freq_hz;
u64 target_freq_hz;
u64 max_freq_hz;
u8 clk_id;
} __packed;
/**
* struct ti_sci_msg_resp_query_clock_freq - Response to a clock frequency query
* @hdr: Generic Header
* @freq_hz: Frequency that is the best match in Hz.
*
* Response to request type TI_SCI_MSG_QUERY_CLOCK_FREQ. NOTE: if the request
* cannot be satisfied, the message will be of type NACK.
*/
struct ti_sci_msg_resp_query_clock_freq {
struct ti_sci_msg_hdr hdr;
u64 freq_hz;
} __packed;
/**
* struct ti_sci_msg_req_set_clock_freq - Request to setup a clock frequency
* @hdr: Generic Header
* @dev_id: Device identifier this request is for
* @min_freq_hz: The minimum allowable frequency in Hz. This is the minimum
* allowable programmed frequency and does not account for clock
* tolerances and jitter.
* @target_freq_hz: The target clock frequency. The clock will be programmed
* at a rate as close to this target frequency as possible.
* @max_freq_hz: The maximum allowable frequency in Hz. This is the maximum
* allowable programmed frequency and does not account for clock
* tolerances and jitter.
* @clk_id: Clock identifier for the device for this request.
*
* NOTE: Normally clock frequency management is automatically done by TISCI
* entity. In case of specific requests, TISCI evaluates capability to achieve
* requested range and responds with success/failure message.
*
* This sets the desired frequency for a clock within an allowable
* range. This message will fail on an enabled clock unless
* MSG_FLAG_CLOCK_ALLOW_FREQ_CHANGE is set for the clock. Additionally,
* if other clocks have their frequency modified due to this message,
* they also must have the MSG_FLAG_CLOCK_ALLOW_FREQ_CHANGE or be disabled.
*
* Calling set frequency on a clock input to the SoC pseudo-device will
* inform the PMMC of that clock's frequency. Setting a frequency of
* zero will indicate the clock is disabled.
*
* Calling set frequency on clock outputs from the SoC pseudo-device will
* function similarly to setting the clock frequency on a device.
*
* Request type is TI_SCI_MSG_SET_CLOCK_FREQ, response is a generic ACK/NACK
* message.
*/
struct ti_sci_msg_req_set_clock_freq {
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u64 min_freq_hz;
u64 target_freq_hz;
u64 max_freq_hz;
u8 clk_id;
} __packed;
/**
* struct ti_sci_msg_req_get_clock_freq - Request to get the clock frequency
* @hdr: Generic Header
* @dev_id: Device identifier this request is for
* @clk_id: Clock identifier for the device for this request.
*
* NOTE: Normally clock frequency management is automatically done by TISCI
* entity. In some cases, clock frequencies are configured by host.
*
* Request type is TI_SCI_MSG_GET_CLOCK_FREQ, responded with clock frequency
* that the clock is currently at.
*/
struct ti_sci_msg_req_get_clock_freq {
struct ti_sci_msg_hdr hdr;
u32 dev_id;
u8 clk_id;
} __packed;
/**
* struct ti_sci_msg_resp_get_clock_freq - Response of clock frequency request
* @hdr: Generic Header
* @freq_hz: Frequency that the clock is currently on, in Hz.
*
* Response to request type TI_SCI_MSG_GET_CLOCK_FREQ.
*/
struct ti_sci_msg_resp_get_clock_freq {
struct ti_sci_msg_hdr hdr;
u64 freq_hz;
} __packed;
#endif /* __TI_SCI_H */
@@ -124,6 +124,15 @@ config MAILBOX_TEST
Test client to help with testing new Controller driver
implementations.
config TEGRA_HSP_MBOX
bool "Tegra HSP (Hardware Synchronization Primitives) Driver"
depends on ARCH_TEGRA_186_SOC
help
The Tegra HSP driver is used for the interprocessor communication
between different remote processors and host processors on Tegra186
and later SoCs. Say Y here if you want to have this support.
If unsure say N.
config XGENE_SLIMPRO_MBOX
tristate "APM SoC X-Gene SLIMpro Mailbox Controller"
depends on ARCH_XGENE
@@ -29,3 +29,5 @@ obj-$(CONFIG_XGENE_SLIMPRO_MBOX) += mailbox-xgene-slimpro.o
obj-$(CONFIG_HI6220_MBOX) += hi6220-mailbox.o
obj-$(CONFIG_BCM_PDC_MBOX) += bcm-pdc-mailbox.o
obj-$(CONFIG_TEGRA_HSP_MBOX) += tegra-hsp.o
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mailbox_controller.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <dt-bindings/mailbox/tegra186-hsp.h>
#define HSP_INT_DIMENSIONING 0x380
#define HSP_nSM_SHIFT 0
#define HSP_nSS_SHIFT 4
#define HSP_nAS_SHIFT 8
#define HSP_nDB_SHIFT 12
#define HSP_nSI_SHIFT 16
#define HSP_nINT_MASK 0xf
#define HSP_DB_TRIGGER 0x0
#define HSP_DB_ENABLE 0x4
#define HSP_DB_RAW 0x8
#define HSP_DB_PENDING 0xc
#define HSP_DB_CCPLEX 1
#define HSP_DB_BPMP 3
#define HSP_DB_MAX 7
struct tegra_hsp_channel;
struct tegra_hsp;
struct tegra_hsp_channel {
struct tegra_hsp *hsp;
struct mbox_chan *chan;
void __iomem *regs;
};
struct tegra_hsp_doorbell {
struct tegra_hsp_channel channel;
struct list_head list;
const char *name;
unsigned int master;
unsigned int index;
};
struct tegra_hsp_db_map {
const char *name;
unsigned int master;
unsigned int index;
};
struct tegra_hsp_soc {
const struct tegra_hsp_db_map *map;
};
struct tegra_hsp {
const struct tegra_hsp_soc *soc;
struct mbox_controller mbox;
void __iomem *regs;
unsigned int irq;
unsigned int num_sm;
unsigned int num_as;
unsigned int num_ss;
unsigned int num_db;
unsigned int num_si;
spinlock_t lock;
struct list_head doorbells;
};
static inline struct tegra_hsp *
to_tegra_hsp(struct mbox_controller *mbox)
{
return container_of(mbox, struct tegra_hsp, mbox);
}
static inline u32 tegra_hsp_readl(struct tegra_hsp *hsp, unsigned int offset)
{
return readl(hsp->regs + offset);
}
static inline void tegra_hsp_writel(struct tegra_hsp *hsp, u32 value,
unsigned int offset)
{
writel(value, hsp->regs + offset);
}
static inline u32 tegra_hsp_channel_readl(struct tegra_hsp_channel *channel,
unsigned int offset)
{
return readl(channel->regs + offset);
}
static inline void tegra_hsp_channel_writel(struct tegra_hsp_channel *channel,
u32 value, unsigned int offset)
{
writel(value, channel->regs + offset);
}
static bool tegra_hsp_doorbell_can_ring(struct tegra_hsp_doorbell *db)
{
u32 value;
value = tegra_hsp_channel_readl(&db->channel, HSP_DB_ENABLE);
return (value & BIT(TEGRA_HSP_DB_MASTER_CCPLEX)) != 0;
}
static struct tegra_hsp_doorbell *
__tegra_hsp_doorbell_get(struct tegra_hsp *hsp, unsigned int master)
{
struct tegra_hsp_doorbell *entry;
list_for_each_entry(entry, &hsp->doorbells, list)
if (entry->master == master)
return entry;
return NULL;
}
static struct tegra_hsp_doorbell *
tegra_hsp_doorbell_get(struct tegra_hsp *hsp, unsigned int master)
{
struct tegra_hsp_doorbell *db;
unsigned long flags;
spin_lock_irqsave(&hsp->lock, flags);
db = __tegra_hsp_doorbell_get(hsp, master);
spin_unlock_irqrestore(&hsp->lock, flags);
return db;
}
static irqreturn_t tegra_hsp_doorbell_irq(int irq, void *data)
{
struct tegra_hsp *hsp = data;
struct tegra_hsp_doorbell *db;
unsigned long master, value;
db = tegra_hsp_doorbell_get(hsp, TEGRA_HSP_DB_MASTER_CCPLEX);
if (!db)
return IRQ_NONE;
value = tegra_hsp_channel_readl(&db->channel, HSP_DB_PENDING);
tegra_hsp_channel_writel(&db->channel, value, HSP_DB_PENDING);
spin_lock(&hsp->lock);
for_each_set_bit(master, &value, hsp->mbox.num_chans) {
struct tegra_hsp_doorbell *db;
db = __tegra_hsp_doorbell_get(hsp, master);
/*
* Depending on the bootloader chain, the CCPLEX doorbell will
* have some doorbells enabled, which means that the interrupt
* will fire as soon as it is requested.
*
* In that case, db->channel.chan will still be NULL here and
* cause a crash if not properly guarded.
*
* It remains to be seen if ignoring the doorbell in that case
* is the correct solution.
*/
if (db && db->channel.chan)
mbox_chan_received_data(db->channel.chan, NULL);
}
spin_unlock(&hsp->lock);
return IRQ_HANDLED;
}
static struct tegra_hsp_channel *
tegra_hsp_doorbell_create(struct tegra_hsp *hsp, const char *name,
unsigned int master, unsigned int index)
{
struct tegra_hsp_doorbell *db;
unsigned int offset;
unsigned long flags;
db = kzalloc(sizeof(*db), GFP_KERNEL);
if (!db)
return ERR_PTR(-ENOMEM);
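/*
 * The doorbell page follows the common register page, the shared
 * mailbox pages (the computation below assumes two mailboxes per
 * 64 KiB page), the shared semaphore pages and the arbitrated
 * semaphore pages; each doorbell then occupies 0x100 bytes.
 */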
offset = (1 + (hsp->num_sm / 2) + hsp->num_ss + hsp->num_as) << 16;
offset += index * 0x100;
db->channel.regs = hsp->regs + offset;
db->channel.hsp = hsp;
db->name = kstrdup_const(name, GFP_KERNEL);
db->master = master;
db->index = index;
spin_lock_irqsave(&hsp->lock, flags);
list_add_tail(&db->list, &hsp->doorbells);
spin_unlock_irqrestore(&hsp->lock, flags);
return &db->channel;
}
static void __tegra_hsp_doorbell_destroy(struct tegra_hsp_doorbell *db)
{
list_del(&db->list);
kfree_const(db->name);
kfree(db);
}
static int tegra_hsp_doorbell_send_data(struct mbox_chan *chan, void *data)
{
struct tegra_hsp_doorbell *db = chan->con_priv;
tegra_hsp_channel_writel(&db->channel, 1, HSP_DB_TRIGGER);
return 0;
}
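/*
 * Channel startup: allow the remote master to ring the CCPLEX doorbell by
 * setting its bit in the CCPLEX doorbell's ENABLE register, so that
 * incoming rings raise the doorbell interrupt handled above.
 */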
static int tegra_hsp_doorbell_startup(struct mbox_chan *chan)
{
struct tegra_hsp_doorbell *db = chan->con_priv;
struct tegra_hsp *hsp = db->channel.hsp;
struct tegra_hsp_doorbell *ccplex;
unsigned long flags;
u32 value;
if (db->master >= hsp->mbox.num_chans) {
dev_err(hsp->mbox.dev,
"invalid master ID %u for HSP channel\n",
db->master);
return -EINVAL;
}
ccplex = tegra_hsp_doorbell_get(hsp, TEGRA_HSP_DB_MASTER_CCPLEX);
if (!ccplex)
return -ENODEV;
if (!tegra_hsp_doorbell_can_ring(db))
return -ENODEV;
spin_lock_irqsave(&hsp->lock, flags);
value = tegra_hsp_channel_readl(&ccplex->channel, HSP_DB_ENABLE);
value |= BIT(db->master);
tegra_hsp_channel_writel(&ccplex->channel, value, HSP_DB_ENABLE);
spin_unlock_irqrestore(&hsp->lock, flags);
return 0;
}
static void tegra_hsp_doorbell_shutdown(struct mbox_chan *chan)
{
struct tegra_hsp_doorbell *db = chan->con_priv;
struct tegra_hsp *hsp = db->channel.hsp;
struct tegra_hsp_doorbell *ccplex;
unsigned long flags;
u32 value;
ccplex = tegra_hsp_doorbell_get(hsp, TEGRA_HSP_DB_MASTER_CCPLEX);
if (!ccplex)
return;
spin_lock_irqsave(&hsp->lock, flags);
value = tegra_hsp_channel_readl(&ccplex->channel, HSP_DB_ENABLE);
value &= ~BIT(db->master);
tegra_hsp_channel_writel(&ccplex->channel, value, HSP_DB_ENABLE);
spin_unlock_irqrestore(&hsp->lock, flags);
}
static const struct mbox_chan_ops tegra_hsp_doorbell_ops = {
.send_data = tegra_hsp_doorbell_send_data,
.startup = tegra_hsp_doorbell_startup,
.shutdown = tegra_hsp_doorbell_shutdown,
};
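/*
 * Translate a two-cell mailbox specifier (channel type, master ID) into a
 * mbox_chan, binding the looked-up doorbell to the first free channel slot
 * of the controller.
 */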
static struct mbox_chan *of_tegra_hsp_xlate(struct mbox_controller *mbox,
const struct of_phandle_args *args)
{
struct tegra_hsp_channel *channel = ERR_PTR(-ENODEV);
struct tegra_hsp *hsp = to_tegra_hsp(mbox);
unsigned int type = args->args[0];
unsigned int master = args->args[1];
struct tegra_hsp_doorbell *db;
struct mbox_chan *chan;
unsigned long flags;
unsigned int i;
switch (type) {
case TEGRA_HSP_MBOX_TYPE_DB:
db = tegra_hsp_doorbell_get(hsp, master);
if (db)
channel = &db->channel;
break;
default:
break;
}
if (IS_ERR(channel))
return ERR_CAST(channel);
spin_lock_irqsave(&hsp->lock, flags);
for (i = 0; i < hsp->mbox.num_chans; i++) {
chan = &hsp->mbox.chans[i];
if (!chan->con_priv) {
chan->con_priv = channel;
channel->chan = chan;
break;
}
chan = NULL;
}
spin_unlock_irqrestore(&hsp->lock, flags);
return chan ?: ERR_PTR(-EBUSY);
}
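/*
 * Minimal consumer sketch (illustrative only; "cl", "my_rx" and "chan" are
 * hypothetical names, but the calls are the generic mailbox client API from
 * <linux/mailbox_client.h>):
 *
 *	cl->dev = dev;
 *	cl->rx_callback = my_rx;
 *	chan = mbox_request_channel(cl, 0);
 *	mbox_send_message(chan, NULL);
 *
 * Doorbells carry no payload, so the message pointer is NULL and the
 * receive callback is invoked with NULL data. The consumer's device tree
 * node references this controller through a two-cell "mboxes" specifier
 * (type, master) decoded by of_tegra_hsp_xlate().
 */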
static void tegra_hsp_remove_doorbells(struct tegra_hsp *hsp)
{
struct tegra_hsp_doorbell *db, *tmp;
unsigned long flags;
spin_lock_irqsave(&hsp->lock, flags);
list_for_each_entry_safe(db, tmp, &hsp->doorbells, list)
__tegra_hsp_doorbell_destroy(db);
spin_unlock_irqrestore(&hsp->lock, flags);
}
static int tegra_hsp_add_doorbells(struct tegra_hsp *hsp)
{
const struct tegra_hsp_db_map *map = hsp->soc->map;
struct tegra_hsp_channel *channel;
while (map->name) {
channel = tegra_hsp_doorbell_create(hsp, map->name,
map->master, map->index);
if (IS_ERR(channel)) {
tegra_hsp_remove_doorbells(hsp);
return PTR_ERR(channel);
}
map++;
}
return 0;
}
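/*
 * Probe: map the HSP registers, read the instance counts from the
 * INT_DIMENSIONING register, create the SoC-specific doorbells and register
 * a 32-channel mailbox controller before wiring up the doorbell interrupt.
 */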
static int tegra_hsp_probe(struct platform_device *pdev)
{
struct tegra_hsp *hsp;
struct resource *res;
u32 value;
int err;
hsp = devm_kzalloc(&pdev->dev, sizeof(*hsp), GFP_KERNEL);
if (!hsp)
return -ENOMEM;
hsp->soc = of_device_get_match_data(&pdev->dev);
INIT_LIST_HEAD(&hsp->doorbells);
spin_lock_init(&hsp->lock);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
hsp->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(hsp->regs))
return PTR_ERR(hsp->regs);
value = tegra_hsp_readl(hsp, HSP_INT_DIMENSIONING);
hsp->num_sm = (value >> HSP_nSM_SHIFT) & HSP_nINT_MASK;
hsp->num_ss = (value >> HSP_nSS_SHIFT) & HSP_nINT_MASK;
hsp->num_as = (value >> HSP_nAS_SHIFT) & HSP_nINT_MASK;
hsp->num_db = (value >> HSP_nDB_SHIFT) & HSP_nINT_MASK;
hsp->num_si = (value >> HSP_nSI_SHIFT) & HSP_nINT_MASK;
err = platform_get_irq_byname(pdev, "doorbell");
if (err < 0) {
dev_err(&pdev->dev, "failed to get doorbell IRQ: %d\n", err);
return err;
}
hsp->irq = err;
hsp->mbox.of_xlate = of_tegra_hsp_xlate;
hsp->mbox.num_chans = 32;
hsp->mbox.dev = &pdev->dev;
hsp->mbox.txdone_irq = false;
hsp->mbox.txdone_poll = false;
hsp->mbox.ops = &tegra_hsp_doorbell_ops;
hsp->mbox.chans = devm_kcalloc(&pdev->dev, hsp->mbox.num_chans,
sizeof(*hsp->mbox.chans),
GFP_KERNEL);
if (!hsp->mbox.chans)
return -ENOMEM;
err = tegra_hsp_add_doorbells(hsp);
if (err < 0) {
dev_err(&pdev->dev, "failed to add doorbells: %d\n", err);
return err;
}
platform_set_drvdata(pdev, hsp);
err = mbox_controller_register(&hsp->mbox);
if (err) {
dev_err(&pdev->dev, "failed to register mailbox: %d\n", err);
tegra_hsp_remove_doorbells(hsp);
return err;
}
err = devm_request_irq(&pdev->dev, hsp->irq, tegra_hsp_doorbell_irq,
IRQF_NO_SUSPEND, dev_name(&pdev->dev), hsp);
if (err < 0) {
dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n",
hsp->irq, err);
/* unwind: the mailbox controller is already registered here */
mbox_controller_unregister(&hsp->mbox);
tegra_hsp_remove_doorbells(hsp);
return err;
}
return 0;
}
static int tegra_hsp_remove(struct platform_device *pdev)
{
struct tegra_hsp *hsp = platform_get_drvdata(pdev);
mbox_controller_unregister(&hsp->mbox);
tegra_hsp_remove_doorbells(hsp);
return 0;
}
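/* Doorbell layout on Tegra186: one doorbell each for the CCPLEX and BPMP. */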
static const struct tegra_hsp_db_map tegra186_hsp_db_map[] = {
{ "ccplex", TEGRA_HSP_DB_MASTER_CCPLEX, HSP_DB_CCPLEX, },
{ "bpmp", TEGRA_HSP_DB_MASTER_BPMP, HSP_DB_BPMP, },
{ /* sentinel */ }
};
static const struct tegra_hsp_soc tegra186_hsp_soc = {
.map = tegra186_hsp_db_map,
};
static const struct of_device_id tegra_hsp_match[] = {
{ .compatible = "nvidia,tegra186-hsp", .data = &tegra186_hsp_soc },
{ }
};
static struct platform_driver tegra_hsp_driver = {
.driver = {
.name = "tegra-hsp",
.of_match_table = tegra_hsp_match,
},
.probe = tegra_hsp_probe,
.remove = tegra_hsp_remove,
};
static int __init tegra_hsp_init(void)
{
return platform_driver_register(&tegra_hsp_driver);
}
core_initcall(tegra_hsp_init);
drivers/memory/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_MVEBU_DEVBUS) += mvebu-devbus.o
obj-$(CONFIG_TEGRA20_MC) += tegra20-mc.o
obj-$(CONFIG_JZ4780_NEMC) += jz4780-nemc.o
obj-$(CONFIG_MTK_SMI) += mtk-smi.o
obj-$(CONFIG_DA8XX_DDRCTL) += da8xx-ddrctl.o
obj-$(CONFIG_SAMSUNG_MC) += samsung/
obj-$(CONFIG_TEGRA_MC) += tegra/
drivers/reset/Makefile
obj-$(CONFIG_RESET_TEGRA_BPMP) += reset-bpmp.o