Commit 801fc022 authored by Thierry Reding

Merge branch 'for-4.10/firmware' into for-4.10/reset

NVIDIA Tegra Boot and Power Management Processor (BPMP)

The BPMP is a dedicated processor in the Tegra chip, designed to handle the
boot process and to offload power management, clock management, and reset
control tasks from the CPU. This binding document defines the resources that
are used by the BPMP firmware driver, which implements the interprocessor
communication (IPC) between the CPU and the BPMP.
Required properties:
- name : Should be bpmp
- compatible
Array of strings
One of:
- "nvidia,tegra186-bpmp"
- mboxes : The phandle of mailbox controller and the mailbox specifier.
- shmem : List of phandles to the TX and RX shared memory areas on which
  the IPC between the CPU and the BPMP is based.
- #clock-cells : Should be 1.
- #power-domain-cells : Should be 1.
- #reset-cells : Should be 1.
This node is a mailbox consumer. See the following files for details of
the mailbox subsystem, and the specifiers implemented by the relevant
provider(s):
- .../mailbox/mailbox.txt
- .../mailbox/nvidia,tegra186-hsp.txt
This node is a clock, power domain, and reset provider. See the following
files for general documentation of those features, and the specifiers
implemented by this node:
- .../clock/clock-bindings.txt
- <dt-bindings/clock/tegra186-clock.h>
- .../power/power_domain.txt
- <dt-bindings/power/tegra186-powergate.h>
- .../reset/reset.txt
- <dt-bindings/reset/tegra186-reset.h>
The BPMP implements some services which must be represented by separate nodes.
For example, it can provide access to certain I2C controllers, and the I2C
bindings represent each I2C controller as a device tree node. Such nodes should
be nested directly inside the main BPMP node.
Software can determine whether a child node of the BPMP node represents a device
by checking for a compatible property. Any node with a compatible property
represents a device that can be instantiated. Nodes without a compatible
property may be used to provide configuration information regarding the BPMP
itself, although no such configuration nodes are currently defined by this
binding.
The BPMP firmware defines no single global name-/numbering-space for such
services. Put another way, the numbering scheme for I2C buses is distinct from
the numbering scheme for any other service the BPMP may provide (e.g. a future
hypothetical SPI bus service). As such, child device nodes will have no reg
property, and the BPMP node will have no #address-cells or #size-cells property.
The shared memory bindings for BPMP
-----------------------------------
The shared memory areas for the IPC TX and RX between the CPU and the BPMP
are predefined and reside on top of sysram, which is an SRAM inside the chip.
See ".../sram/sram.txt" for the bindings.
Example:

hsp_top0: hsp@03c00000 {
	...
	#mbox-cells = <2>;
};

sysram@30000000 {
	compatible = "nvidia,tegra186-sysram", "mmio-sram";
	reg = <0x0 0x30000000 0x0 0x50000>;
	#address-cells = <2>;
	#size-cells = <2>;
	ranges = <0 0x0 0x0 0x30000000 0x0 0x50000>;

	cpu_bpmp_tx: shmem@4e000 {
		compatible = "nvidia,tegra186-bpmp-shmem";
		reg = <0x0 0x4e000 0x0 0x1000>;
		label = "cpu-bpmp-tx";
		pool;
	};

	cpu_bpmp_rx: shmem@4f000 {
		compatible = "nvidia,tegra186-bpmp-shmem";
		reg = <0x0 0x4f000 0x0 0x1000>;
		label = "cpu-bpmp-rx";
		pool;
	};
};

bpmp {
	compatible = "nvidia,tegra186-bpmp";
	mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_BPMP>;
	shmem = <&cpu_bpmp_tx &cpu_bpmp_rx>;
	#clock-cells = <1>;
	#power-domain-cells = <1>;
	#reset-cells = <1>;

	i2c {
		compatible = "...";
		...
	};
};
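
As a rough, illustrative sketch (not part of this binding): a kernel client
driver consumes this node through the tegra_bpmp_get()/tegra_bpmp_transfer()
API declared in <soc/tegra/bpmp.h>. EXAMPLE_MRQ and the 32-bit payload words
below are hypothetical placeholders; real MRQ numbers and payload layouts are
defined by the BPMP ABI in <soc/tegra/bpmp-abi.h>.

	#include <linux/device.h>
	#include <linux/err.h>
	#include <soc/tegra/bpmp.h>

	#define EXAMPLE_MRQ 0	/* hypothetical request number */

	static int example_query_bpmp(struct device *dev)
	{
		u32 request = 0, response = 0;
		struct tegra_bpmp_message msg = {
			.mrq = EXAMPLE_MRQ,
			.tx = { .data = &request, .size = sizeof(request) },
			.rx = { .data = &response, .size = sizeof(response) },
		};
		struct tegra_bpmp *bpmp;
		int err;

		/* look up the BPMP instance referenced by this device's node */
		bpmp = tegra_bpmp_get(dev);
		if (IS_ERR(bpmp))
			return PTR_ERR(bpmp);

		err = tegra_bpmp_transfer(bpmp, &msg);

		tegra_bpmp_put(bpmp);
		return err;
	}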
NVIDIA Tegra Hardware Synchronization Primitives (HSP)
The HSP modules are used by the processors to share resources and to
communicate with one another. They provide a set of hardware synchronization
primitives for interprocessor communication, so that IPC protocols can use
hardware synchronization primitives when operating between two processors
that are not in an SMP relationship.

The features supported by the HSP are shared mailboxes, shared semaphores,
arbitrated semaphores, and doorbells.
Required properties:
- name : Should be hsp
- compatible
Array of strings.
One of:
- "nvidia,tegra186-hsp"
- reg : Offset and length of the register set for the device.
- interrupt-names
Array of strings.
Contains a list of names for the interrupts described by the interrupt
property. May contain the following entries, in any order:
- "doorbell"
Users of this binding MUST look up entries in the interrupt property
by name, using this interrupt-names property to do so.
- interrupts
Array of interrupt specifiers.
Must contain one entry per entry in the interrupt-names property,
in a matching order.
- #mbox-cells : Should be 2.
The mbox specifier of the "mboxes" property in the client node should
contain two cells. The first cell is the HSP type and the second cell is
the ID that the client is going to use. These values are defined in the
following file:
- <dt-bindings/mailbox/tegra186-hsp.h>.
Example:

hsp_top0: hsp@3c00000 {
	compatible = "nvidia,tegra186-hsp";
	reg = <0x0 0x03c00000 0x0 0xa0000>;
	interrupts = <GIC_SPI 176 IRQ_TYPE_LEVEL_HIGH>;
	interrupt-names = "doorbell";
	#mbox-cells = <2>;
};

client {
	...
	mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_XXX>;
};
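
As a rough, illustrative sketch (not part of this binding): a client driver
can request the channel described by the two-cell specifier above through the
generic mailbox client API from <linux/mailbox_client.h>. The function and
variable names below are hypothetical; note that doorbells carry no payload,
so the message pointer passed to mbox_send_message() is unused by this
controller.

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/mailbox_client.h>
	#include <linux/slab.h>

	static void example_rx_callback(struct mbox_client *cl, void *data)
	{
		/* a doorbell carries no data, so @data is NULL here */
		dev_dbg(cl->dev, "doorbell received\n");
	}

	static int example_ring_doorbell(struct device *dev)
	{
		struct mbox_client *cl;
		struct mbox_chan *chan;
		int err;

		cl = devm_kzalloc(dev, sizeof(*cl), GFP_KERNEL);
		if (!cl)
			return -ENOMEM;

		cl->dev = dev;
		cl->rx_callback = example_rx_callback;
		cl->tx_block = false;

		/* resolves the first entry of this device's "mboxes" property */
		chan = mbox_request_channel(cl, 0);
		if (IS_ERR(chan))
			return PTR_ERR(chan);

		err = mbox_send_message(chan, NULL);
		mbox_free_channel(chan);

		return err < 0 ? err : 0;
	}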
@@ -210,5 +210,6 @@ source "drivers/firmware/broadcom/Kconfig"
source "drivers/firmware/google/Kconfig"
source "drivers/firmware/efi/Kconfig"
source "drivers/firmware/meson/Kconfig"
source "drivers/firmware/tegra/Kconfig"
endmenu
@@ -26,3 +26,4 @@ obj-y += meson/
obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
obj-$(CONFIG_EFI) += efi/
obj-$(CONFIG_UEFI_CPER) += efi/
obj-y += tegra/
menu "Tegra firmware driver"
config TEGRA_IVC
bool "Tegra IVC protocol"
depends on ARCH_TEGRA
help
IVC (Inter-VM Communication) protocol is part of the IPC
(Inter Processor Communication) framework on Tegra. It maintains the
data and the different communication channels in SysRAM or RAM and
keeps the content synchronized between the host CPU and remote
processors.
config TEGRA_BPMP
bool "Tegra BPMP driver"
depends on ARCH_TEGRA && TEGRA_HSP_MBOX && TEGRA_IVC
help
BPMP (Boot and Power Management Processor) is designed to off-load
PM functions, such as clock, DVFS, thermal and power management, from
the CPU. It relies on HSP as the hardware synchronization and
notification module and on IVC as the message communication protocol.
This driver manages the IPC interface between host CPU and the
firmware running on BPMP.
endmenu
obj-$(CONFIG_TEGRA_BPMP) += bpmp.o
obj-$(CONFIG_TEGRA_IVC) += ivc.o
(This diff has been collapsed.)
/*
* Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#include <soc/tegra/ivc.h>
#define TEGRA_IVC_ALIGN 64
/*
* IVC channel reset protocol.
*
* Each end uses its tx_channel.state to indicate its synchronization state.
*/
enum tegra_ivc_state {
/*
* This value is zero for backwards compatibility with services that
* assume channels to be initially zeroed. Such channels are in an
* initially valid state, but cannot be asynchronously reset, and must
* maintain a valid state at all times.
*
* The transmitting end can enter the established state from the sync or
* ack state when it observes the receiving endpoint in the ack or
established state, indicating that it has cleared the counters in our
* rx_channel.
*/
TEGRA_IVC_STATE_ESTABLISHED = 0,
/*
* If an endpoint is observed in the sync state, the remote endpoint is
* allowed to clear the counters it owns asynchronously with respect to
* the current endpoint. Therefore, the current endpoint is no longer
* allowed to communicate.
*/
TEGRA_IVC_STATE_SYNC,
/*
* When the transmitting end observes the receiving end in the sync
* state, it can clear the w_count and r_count and transition to the ack
* state. If the remote endpoint observes us in the ack state, it can
* return to the established state once it has cleared its counters.
*/
TEGRA_IVC_STATE_ACK
};
/*
* This structure is divided into two cache-aligned parts, the first is only
* written through the tx.channel pointer, while the second is only written
* through the rx.channel pointer. This delineates ownership of the cache
* lines, which is critical to performance and necessary in non-cache coherent
* implementations.
*/
struct tegra_ivc_header {
union {
struct {
/* fields owned by the transmitting end */
u32 count;
u32 state;
};
u8 pad[TEGRA_IVC_ALIGN];
} tx;
union {
/* fields owned by the receiving end */
u32 count;
u8 pad[TEGRA_IVC_ALIGN];
} rx;
};
static inline void tegra_ivc_invalidate(struct tegra_ivc *ivc, dma_addr_t phys)
{
if (!ivc->peer)
return;
dma_sync_single_for_cpu(ivc->peer, phys, TEGRA_IVC_ALIGN,
DMA_FROM_DEVICE);
}
static inline void tegra_ivc_flush(struct tegra_ivc *ivc, dma_addr_t phys)
{
if (!ivc->peer)
return;
dma_sync_single_for_device(ivc->peer, phys, TEGRA_IVC_ALIGN,
DMA_TO_DEVICE);
}
static inline bool tegra_ivc_empty(struct tegra_ivc *ivc,
struct tegra_ivc_header *header)
{
/*
* This function performs multiple checks on the same values with
* security implications, so create snapshots with ACCESS_ONCE() to
* ensure that these checks use the same values.
*/
u32 tx = ACCESS_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count);
/*
* Perform an over-full check to prevent denial of service attacks
* where a server could be easily fooled into believing that there's
* an extremely large number of frames ready, since receivers are not
* expected to check for full or over-full conditions.
*
* Although the channel isn't empty, this is an invalid case caused by
* a potentially malicious peer, so returning empty is safer, because
* it gives the impression that the channel has gone silent.
*/
if (tx - rx > ivc->num_frames)
return true;
return tx == rx;
}
static inline bool tegra_ivc_full(struct tegra_ivc *ivc,
struct tegra_ivc_header *header)
{
u32 tx = ACCESS_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count);
/*
* Invalid cases where the counters indicate that the queue is over
* capacity also appear full.
*/
return tx - rx >= ivc->num_frames;
}
static inline u32 tegra_ivc_available(struct tegra_ivc *ivc,
struct tegra_ivc_header *header)
{
u32 tx = ACCESS_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count);
/*
* This function isn't expected to be used in scenarios where an
* over-full situation can lead to denial of service attacks. See the
* comment in tegra_ivc_empty() for an explanation about special
* over-full considerations.
*/
return tx - rx;
}
static inline void tegra_ivc_advance_tx(struct tegra_ivc *ivc)
{
ACCESS_ONCE(ivc->tx.channel->tx.count) =
ACCESS_ONCE(ivc->tx.channel->tx.count) + 1;
if (ivc->tx.position == ivc->num_frames - 1)
ivc->tx.position = 0;
else
ivc->tx.position++;
}
static inline void tegra_ivc_advance_rx(struct tegra_ivc *ivc)
{
ACCESS_ONCE(ivc->rx.channel->rx.count) =
ACCESS_ONCE(ivc->rx.channel->rx.count) + 1;
if (ivc->rx.position == ivc->num_frames - 1)
ivc->rx.position = 0;
else
ivc->rx.position++;
}
static inline int tegra_ivc_check_read(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* tx.channel->state is set locally, so it is not synchronized with
* state from the remote peer. The remote peer cannot reset its
* transmit counters until we've acknowledged its synchronization
* request, so no additional synchronization is required because an
* asynchronous transition of rx.channel->state to
* TEGRA_IVC_STATE_ACK is not allowed.
*/
if (ivc->tx.channel->tx.state != TEGRA_IVC_STATE_ESTABLISHED)
return -ECONNRESET;
/*
* Avoid unnecessary invalidations when performing repeated accesses
* to an IVC channel by checking the old queue pointers first.
*
* Synchronization is only necessary when these pointers indicate
* empty or full.
*/
if (!tegra_ivc_empty(ivc, ivc->rx.channel))
return 0;
tegra_ivc_invalidate(ivc, ivc->rx.phys + offset);
if (tegra_ivc_empty(ivc, ivc->rx.channel))
return -ENOSPC;
return 0;
}
static inline int tegra_ivc_check_write(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, rx.count);
if (ivc->tx.channel->tx.state != TEGRA_IVC_STATE_ESTABLISHED)
return -ECONNRESET;
if (!tegra_ivc_full(ivc, ivc->tx.channel))
return 0;
tegra_ivc_invalidate(ivc, ivc->tx.phys + offset);
if (tegra_ivc_full(ivc, ivc->tx.channel))
return -ENOSPC;
return 0;
}
static void *tegra_ivc_frame_virt(struct tegra_ivc *ivc,
struct tegra_ivc_header *header,
unsigned int frame)
{
if (WARN_ON(frame >= ivc->num_frames))
return ERR_PTR(-EINVAL);
return (void *)(header + 1) + ivc->frame_size * frame;
}
static inline dma_addr_t tegra_ivc_frame_phys(struct tegra_ivc *ivc,
dma_addr_t phys,
unsigned int frame)
{
unsigned long offset;
offset = sizeof(struct tegra_ivc_header) + ivc->frame_size * frame;
return phys + offset;
}
static inline void tegra_ivc_invalidate_frame(struct tegra_ivc *ivc,
dma_addr_t phys,
unsigned int frame,
unsigned int offset,
size_t size)
{
if (!ivc->peer || WARN_ON(frame >= ivc->num_frames))
return;
phys = tegra_ivc_frame_phys(ivc, phys, frame) + offset;
dma_sync_single_for_cpu(ivc->peer, phys, size, DMA_FROM_DEVICE);
}
static inline void tegra_ivc_flush_frame(struct tegra_ivc *ivc,
dma_addr_t phys,
unsigned int frame,
unsigned int offset,
size_t size)
{
if (!ivc->peer || WARN_ON(frame >= ivc->num_frames))
return;
phys = tegra_ivc_frame_phys(ivc, phys, frame) + offset;
dma_sync_single_for_device(ivc->peer, phys, size, DMA_TO_DEVICE);
}
/* directly peek at the next frame rx'ed */
void *tegra_ivc_read_get_next_frame(struct tegra_ivc *ivc)
{
int err;
if (WARN_ON(ivc == NULL))
return ERR_PTR(-EINVAL);
err = tegra_ivc_check_read(ivc);
if (err < 0)
return ERR_PTR(err);
/*
* Order observation of ivc->rx.position potentially indicating new
* data before data read.
*/
smp_rmb();
tegra_ivc_invalidate_frame(ivc, ivc->rx.phys, ivc->rx.position, 0,
ivc->frame_size);
return tegra_ivc_frame_virt(ivc, ivc->rx.channel, ivc->rx.position);
}
EXPORT_SYMBOL(tegra_ivc_read_get_next_frame);
int tegra_ivc_read_advance(struct tegra_ivc *ivc)
{
unsigned int rx = offsetof(struct tegra_ivc_header, rx.count);
unsigned int tx = offsetof(struct tegra_ivc_header, tx.count);
int err;
/*
* No read barriers or synchronization here: the caller is expected to
* have already observed the channel non-empty. This check is just to
* catch programming errors.
*/
err = tegra_ivc_check_read(ivc);
if (err < 0)
return err;
tegra_ivc_advance_rx(ivc);
tegra_ivc_flush(ivc, ivc->rx.phys + rx);
/*
* Ensure our write to ivc->rx.position occurs before our read from
* ivc->tx.position.
*/
smp_mb();
/*
* Notify only upon transition from full to non-full. The available
* count can only asynchronously increase, so the worst possible
* side-effect will be a spurious notification.
*/
tegra_ivc_invalidate(ivc, ivc->rx.phys + tx);
if (tegra_ivc_available(ivc, ivc->rx.channel) == ivc->num_frames - 1)
ivc->notify(ivc, ivc->notify_data);
return 0;
}
EXPORT_SYMBOL(tegra_ivc_read_advance);
/* directly poke at the next frame to be tx'ed */
void *tegra_ivc_write_get_next_frame(struct tegra_ivc *ivc)
{
int err;
err = tegra_ivc_check_write(ivc);
if (err < 0)
return ERR_PTR(err);
return tegra_ivc_frame_virt(ivc, ivc->tx.channel, ivc->tx.position);
}
EXPORT_SYMBOL(tegra_ivc_write_get_next_frame);
/* advance the tx buffer */
int tegra_ivc_write_advance(struct tegra_ivc *ivc)
{
unsigned int tx = offsetof(struct tegra_ivc_header, tx.count);
unsigned int rx = offsetof(struct tegra_ivc_header, rx.count);
int err;
err = tegra_ivc_check_write(ivc);
if (err < 0)
return err;
tegra_ivc_flush_frame(ivc, ivc->tx.phys, ivc->tx.position, 0,
ivc->frame_size);
/*
* Order any possible stores to the frame before update of
* ivc->tx.position.
*/
smp_wmb();
tegra_ivc_advance_tx(ivc);
tegra_ivc_flush(ivc, ivc->tx.phys + tx);
/*
* Ensure our write to ivc->tx.position occurs before our read from
* ivc->rx.position.
*/
smp_mb();
/*
* Notify only upon transition from empty to non-empty. The available
* count can only asynchronously decrease, so the worst possible
* side-effect will be a spurious notification.
*/
tegra_ivc_invalidate(ivc, ivc->tx.phys + rx);
if (tegra_ivc_available(ivc, ivc->tx.channel) == 1)
ivc->notify(ivc, ivc->notify_data);
return 0;
}
EXPORT_SYMBOL(tegra_ivc_write_advance);
void tegra_ivc_reset(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, tx.count);
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_SYNC;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
ivc->notify(ivc, ivc->notify_data);
}
EXPORT_SYMBOL(tegra_ivc_reset);
/*
* =======================================================
* IVC State Transition Table - see tegra_ivc_notified()
* =======================================================
*
* local remote action
* ----- ------ -----------------------------------
* SYNC EST <none>
* SYNC ACK reset counters; move to EST; notify
* SYNC SYNC reset counters; move to ACK; notify
* ACK EST move to EST; notify
* ACK ACK move to EST; notify
* ACK SYNC reset counters; move to ACK; notify
* EST EST <none>
* EST ACK <none>
* EST SYNC reset counters; move to ACK; notify
*
* ===============================================================
*/
int tegra_ivc_notified(struct tegra_ivc *ivc)
{
unsigned int offset = offsetof(struct tegra_ivc_header, tx.count);
enum tegra_ivc_state state;
/* Copy the receiver's state out of shared memory. */
tegra_ivc_invalidate(ivc, ivc->rx.phys + offset);
state = ACCESS_ONCE(ivc->rx.channel->tx.state);
if (state == TEGRA_IVC_STATE_SYNC) {
offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* Order observation of TEGRA_IVC_STATE_SYNC before stores
* clearing tx.channel.
*/
smp_rmb();
/*
* Reset tx.channel counters. The remote end is in the SYNC
* state and won't make progress until we change our state,
* so the counters are not in use at this time.
*/
ivc->tx.channel->tx.count = 0;
ivc->rx.channel->rx.count = 0;
ivc->tx.position = 0;
ivc->rx.position = 0;
/*
* Ensure that counters appear cleared before new state can be
* observed.
*/
smp_wmb();
/*
* Move to ACK state. We have just cleared our counters, so it
* is now safe for the remote end to start using these values.
*/
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_ACK;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
/*
* Notify remote end to observe state transition.
*/
ivc->notify(ivc, ivc->notify_data);
} else if (ivc->tx.channel->tx.state == TEGRA_IVC_STATE_SYNC &&
state == TEGRA_IVC_STATE_ACK) {
offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* Order observation of ivc_state_sync before stores clearing
* tx_channel.
*/
smp_rmb();
/*
* Reset tx.channel counters. The remote end is in the ACK
* state and won't make progress until we change our state,
* so the counters are not in use at this time.
*/
ivc->tx.channel->tx.count = 0;
ivc->rx.channel->rx.count = 0;
ivc->tx.position = 0;
ivc->rx.position = 0;
/*
* Ensure that counters appear cleared before new state can be
* observed.
*/
smp_wmb();
/*
* Move to ESTABLISHED state. We know that the remote end has
* already cleared its counters, so it is safe to start
* writing/reading on this channel.
*/
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_ESTABLISHED;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
/*
* Notify remote end to observe state transition.
*/
ivc->notify(ivc, ivc->notify_data);
} else if (ivc->tx.channel->tx.state == TEGRA_IVC_STATE_ACK) {
offset = offsetof(struct tegra_ivc_header, tx.count);
/*
* At this point, we have observed the peer to be in either
* the ACK or ESTABLISHED state. Next, order observation of
* peer state before storing to tx.channel.
*/
smp_rmb();
/*
* Move to ESTABLISHED state. We know that we have previously
* cleared our counters, and we know that the remote end has
* cleared its counters, so it is safe to start writing/reading
* on this channel.
*/
ivc->tx.channel->tx.state = TEGRA_IVC_STATE_ESTABLISHED;
tegra_ivc_flush(ivc, ivc->tx.phys + offset);
/*
* Notify remote end to observe state transition.
*/
ivc->notify(ivc, ivc->notify_data);
} else {
/*
* There is no need to handle any further action. Either the
* channel is already fully established, or we are waiting for
* the remote end to catch up with our current state. Refer
* to the diagram in "IVC State Transition Table" above.
*/
}
if (ivc->tx.channel->tx.state != TEGRA_IVC_STATE_ESTABLISHED)
return -EAGAIN;
return 0;
}
EXPORT_SYMBOL(tegra_ivc_notified);
size_t tegra_ivc_align(size_t size)
{
return ALIGN(size, TEGRA_IVC_ALIGN);
}
EXPORT_SYMBOL(tegra_ivc_align);
unsigned tegra_ivc_total_queue_size(unsigned queue_size)
{
if (!IS_ALIGNED(queue_size, TEGRA_IVC_ALIGN)) {
pr_err("%s: queue_size (%u) must be %u-byte aligned\n",
__func__, queue_size, TEGRA_IVC_ALIGN);
return 0;
}
return queue_size + sizeof(struct tegra_ivc_header);
}
EXPORT_SYMBOL(tegra_ivc_total_queue_size);
static int tegra_ivc_check_params(unsigned long rx, unsigned long tx,
unsigned int num_frames, size_t frame_size)
{
BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct tegra_ivc_header, tx.count),
TEGRA_IVC_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct tegra_ivc_header, rx.count),
TEGRA_IVC_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(sizeof(struct tegra_ivc_header),
TEGRA_IVC_ALIGN));
if ((uint64_t)num_frames * (uint64_t)frame_size >= 0x100000000UL) {
pr_err("num_frames * frame_size overflows\n");
return -EINVAL;
}
if (!IS_ALIGNED(frame_size, TEGRA_IVC_ALIGN)) {
pr_err("frame size not adequately aligned: %zu\n", frame_size);
return -EINVAL;
}
/*
* The headers must at least be aligned enough for counters
* to be accessed atomically.
*/
if (!IS_ALIGNED(rx, TEGRA_IVC_ALIGN)) {
pr_err("IVC channel start not aligned: %#lx\n", rx);
return -EINVAL;
}
if (!IS_ALIGNED(tx, TEGRA_IVC_ALIGN)) {
pr_err("IVC channel start not aligned: %#lx\n", tx);
return -EINVAL;
}
if (rx < tx) {
if (rx + frame_size * num_frames > tx) {
pr_err("queue regions overlap: %#lx + %zx > %#lx\n",
rx, frame_size * num_frames, tx);
return -EINVAL;
}
} else {
if (tx + frame_size * num_frames > rx) {
pr_err("queue regions overlap: %#lx + %zx > %#lx\n",
tx, frame_size * num_frames, rx);
return -EINVAL;
}
}
return 0;
}
int tegra_ivc_init(struct tegra_ivc *ivc, struct device *peer, void *rx,
dma_addr_t rx_phys, void *tx, dma_addr_t tx_phys,
unsigned int num_frames, size_t frame_size,
void (*notify)(struct tegra_ivc *ivc, void *data),
void *data)
{
size_t queue_size;
int err;
if (WARN_ON(!ivc || !notify))
return -EINVAL;
/*
* All sizes that can be returned by communication functions should
* fit in an int.
*/
if (frame_size > INT_MAX)
return -E2BIG;
err = tegra_ivc_check_params((unsigned long)rx, (unsigned long)tx,
num_frames, frame_size);
if (err < 0)
return err;
queue_size = tegra_ivc_total_queue_size(num_frames * frame_size);
if (peer) {
ivc->rx.phys = dma_map_single(peer, rx, queue_size,
DMA_BIDIRECTIONAL);
if (ivc->rx.phys == DMA_ERROR_CODE)
return -ENOMEM;
ivc->tx.phys = dma_map_single(peer, tx, queue_size,
DMA_BIDIRECTIONAL);
if (ivc->tx.phys == DMA_ERROR_CODE) {
dma_unmap_single(peer, ivc->rx.phys, queue_size,
DMA_BIDIRECTIONAL);
return -ENOMEM;
}
} else {
ivc->rx.phys = rx_phys;
ivc->tx.phys = tx_phys;
}
ivc->rx.channel = rx;
ivc->tx.channel = tx;
ivc->peer = peer;
ivc->notify = notify;
ivc->notify_data = data;
ivc->frame_size = frame_size;
ivc->num_frames = num_frames;
/*
* These values aren't necessarily correct until the channel has been
* reset.
*/
ivc->tx.position = 0;
ivc->rx.position = 0;
return 0;
}
EXPORT_SYMBOL(tegra_ivc_init);
void tegra_ivc_cleanup(struct tegra_ivc *ivc)
{
if (ivc->peer) {
size_t size = tegra_ivc_total_queue_size(ivc->num_frames *
ivc->frame_size);
dma_unmap_single(ivc->peer, ivc->rx.phys, size,
DMA_BIDIRECTIONAL);
dma_unmap_single(ivc->peer, ivc->tx.phys, size,
DMA_BIDIRECTIONAL);
}
}
EXPORT_SYMBOL(tegra_ivc_cleanup);
@@ -124,6 +124,15 @@ config MAILBOX_TEST
Test client to help with testing new Controller driver
implementations.
config TEGRA_HSP_MBOX
bool "Tegra HSP (Hardware Synchronization Primitives) Driver"
depends on ARCH_TEGRA_186_SOC
help
The Tegra HSP driver is used for the interprocessor communication
between different remote processors and host processors on Tegra186
and later SoCs. Say Y here if you want to have this support.
If unsure say N.
config XGENE_SLIMPRO_MBOX
tristate "APM SoC X-Gene SLIMpro Mailbox Controller"
depends on ARCH_XGENE
@@ -29,3 +29,5 @@ obj-$(CONFIG_XGENE_SLIMPRO_MBOX) += mailbox-xgene-slimpro.o
obj-$(CONFIG_HI6220_MBOX) += hi6220-mailbox.o
obj-$(CONFIG_BCM_PDC_MBOX) += bcm-pdc-mailbox.o
obj-$(CONFIG_TEGRA_HSP_MBOX) += tegra-hsp.o
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mailbox_controller.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <dt-bindings/mailbox/tegra186-hsp.h>
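/*
 * The HSP_INT_DIMENSIONING register reports how many shared mailboxes (nSM),
 * shared semaphores (nSS), arbitrated semaphores (nAS), doorbells (nDB) and
 * shared interrupts (nSI) this HSP instance provides; tegra_hsp_probe() reads
 * it to determine the layout of the register block.
 */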
#define HSP_INT_DIMENSIONING 0x380
#define HSP_nSM_SHIFT 0
#define HSP_nSS_SHIFT 4
#define HSP_nAS_SHIFT 8
#define HSP_nDB_SHIFT 12
#define HSP_nSI_SHIFT 16
#define HSP_nINT_MASK 0xf
#define HSP_DB_TRIGGER 0x0
#define HSP_DB_ENABLE 0x4
#define HSP_DB_RAW 0x8
#define HSP_DB_PENDING 0xc
#define HSP_DB_CCPLEX 1
#define HSP_DB_BPMP 3
#define HSP_DB_MAX 7
struct tegra_hsp_channel;
struct tegra_hsp;
struct tegra_hsp_channel {
struct tegra_hsp *hsp;
struct mbox_chan *chan;
void __iomem *regs;
};
struct tegra_hsp_doorbell {
struct tegra_hsp_channel channel;
struct list_head list;
const char *name;
unsigned int master;
unsigned int index;
};
struct tegra_hsp_db_map {
const char *name;
unsigned int master;
unsigned int index;
};
struct tegra_hsp_soc {
const struct tegra_hsp_db_map *map;
};
struct tegra_hsp {
const struct tegra_hsp_soc *soc;
struct mbox_controller mbox;
void __iomem *regs;
unsigned int irq;
unsigned int num_sm;
unsigned int num_as;
unsigned int num_ss;
unsigned int num_db;
unsigned int num_si;
spinlock_t lock;
struct list_head doorbells;
};
static inline struct tegra_hsp *
to_tegra_hsp(struct mbox_controller *mbox)
{
return container_of(mbox, struct tegra_hsp, mbox);
}
static inline u32 tegra_hsp_readl(struct tegra_hsp *hsp, unsigned int offset)
{
return readl(hsp->regs + offset);
}
static inline void tegra_hsp_writel(struct tegra_hsp *hsp, u32 value,
unsigned int offset)
{
writel(value, hsp->regs + offset);
}
static inline u32 tegra_hsp_channel_readl(struct tegra_hsp_channel *channel,
unsigned int offset)
{
return readl(channel->regs + offset);
}
static inline void tegra_hsp_channel_writel(struct tegra_hsp_channel *channel,
u32 value, unsigned int offset)
{
writel(value, channel->regs + offset);
}
static bool tegra_hsp_doorbell_can_ring(struct tegra_hsp_doorbell *db)
{
u32 value;
value = tegra_hsp_channel_readl(&db->channel, HSP_DB_ENABLE);
return (value & BIT(TEGRA_HSP_DB_MASTER_CCPLEX)) != 0;
}
static struct tegra_hsp_doorbell *
__tegra_hsp_doorbell_get(struct tegra_hsp *hsp, unsigned int master)
{
struct tegra_hsp_doorbell *entry;
list_for_each_entry(entry, &hsp->doorbells, list)
if (entry->master == master)
return entry;
return NULL;
}
static struct tegra_hsp_doorbell *
tegra_hsp_doorbell_get(struct tegra_hsp *hsp, unsigned int master)
{
struct tegra_hsp_doorbell *db;
unsigned long flags;
spin_lock_irqsave(&hsp->lock, flags);
db = __tegra_hsp_doorbell_get(hsp, master);
spin_unlock_irqrestore(&hsp->lock, flags);
return db;
}
static irqreturn_t tegra_hsp_doorbell_irq(int irq, void *data)
{
struct tegra_hsp *hsp = data;
struct tegra_hsp_doorbell *db;
unsigned long master, value;
db = tegra_hsp_doorbell_get(hsp, TEGRA_HSP_DB_MASTER_CCPLEX);
if (!db)
return IRQ_NONE;
value = tegra_hsp_channel_readl(&db->channel, HSP_DB_PENDING);
tegra_hsp_channel_writel(&db->channel, value, HSP_DB_PENDING);
spin_lock(&hsp->lock);
for_each_set_bit(master, &value, hsp->mbox.num_chans) {
struct tegra_hsp_doorbell *db;
db = __tegra_hsp_doorbell_get(hsp, master);
/*
* Depending on the bootloader chain, the CCPLEX doorbell will
* have some doorbells enabled, which means that requesting an
* interrupt will immediately fire.
*
* In that case, db->channel.chan will still be NULL here and
* cause a crash if not properly guarded.
*
* It remains to be seen if ignoring the doorbell in that case
* is the correct solution.
*/
if (db && db->channel.chan)
mbox_chan_received_data(db->channel.chan, NULL);
}
spin_unlock(&hsp->lock);
return IRQ_HANDLED;
}
static struct tegra_hsp_channel *
tegra_hsp_doorbell_create(struct tegra_hsp *hsp, const char *name,
unsigned int master, unsigned int index)
{
struct tegra_hsp_doorbell *db;
unsigned int offset;
unsigned long flags;
db = kzalloc(sizeof(*db), GFP_KERNEL);
if (!db)
return ERR_PTR(-ENOMEM);
offset = (1 + (hsp->num_sm / 2) + hsp->num_ss + hsp->num_as) << 16;
offset += index * 0x100;
db->channel.regs = hsp->regs + offset;
db->channel.hsp = hsp;
db->name = kstrdup_const(name, GFP_KERNEL);
db->master = master;
db->index = index;
spin_lock_irqsave(&hsp->lock, flags);
list_add_tail(&db->list, &hsp->doorbells);
spin_unlock_irqrestore(&hsp->lock, flags);
return &db->channel;
}
static void __tegra_hsp_doorbell_destroy(struct tegra_hsp_doorbell *db)
{
list_del(&db->list);
kfree_const(db->name);
kfree(db);
}
static int tegra_hsp_doorbell_send_data(struct mbox_chan *chan, void *data)
{
struct tegra_hsp_doorbell *db = chan->con_priv;
tegra_hsp_channel_writel(&db->channel, 1, HSP_DB_TRIGGER);
return 0;
}
static int tegra_hsp_doorbell_startup(struct mbox_chan *chan)
{
struct tegra_hsp_doorbell *db = chan->con_priv;
struct tegra_hsp *hsp = db->channel.hsp;
struct tegra_hsp_doorbell *ccplex;
unsigned long flags;
u32 value;
if (db->master >= hsp->mbox.num_chans) {
dev_err(hsp->mbox.dev,
"invalid master ID %u for HSP channel\n",
db->master);
return -EINVAL;
}
ccplex = tegra_hsp_doorbell_get(hsp, TEGRA_HSP_DB_MASTER_CCPLEX);
if (!ccplex)
return -ENODEV;
if (!tegra_hsp_doorbell_can_ring(db))
return -ENODEV;
spin_lock_irqsave(&hsp->lock, flags);
value = tegra_hsp_channel_readl(&ccplex->channel, HSP_DB_ENABLE);
value |= BIT(db->master);
tegra_hsp_channel_writel(&ccplex->channel, value, HSP_DB_ENABLE);
spin_unlock_irqrestore(&hsp->lock, flags);
return 0;
}
static void tegra_hsp_doorbell_shutdown(struct mbox_chan *chan)
{
struct tegra_hsp_doorbell *db = chan->con_priv;
struct tegra_hsp *hsp = db->channel.hsp;
struct tegra_hsp_doorbell *ccplex;
unsigned long flags;
u32 value;
ccplex = tegra_hsp_doorbell_get(hsp, TEGRA_HSP_DB_MASTER_CCPLEX);
if (!ccplex)
return;
spin_lock_irqsave(&hsp->lock, flags);
value = tegra_hsp_channel_readl(&ccplex->channel, HSP_DB_ENABLE);
value &= ~BIT(db->master);
tegra_hsp_channel_writel(&ccplex->channel, value, HSP_DB_ENABLE);
spin_unlock_irqrestore(&hsp->lock, flags);
}
static const struct mbox_chan_ops tegra_hsp_doorbell_ops = {
.send_data = tegra_hsp_doorbell_send_data,
.startup = tegra_hsp_doorbell_startup,
.shutdown = tegra_hsp_doorbell_shutdown,
};
static struct mbox_chan *of_tegra_hsp_xlate(struct mbox_controller *mbox,
const struct of_phandle_args *args)
{
struct tegra_hsp_channel *channel = ERR_PTR(-ENODEV);
struct tegra_hsp *hsp = to_tegra_hsp(mbox);
unsigned int type = args->args[0];
unsigned int master = args->args[1];
struct tegra_hsp_doorbell *db;
struct mbox_chan *chan;
unsigned long flags;
unsigned int i;
switch (type) {
case TEGRA_HSP_MBOX_TYPE_DB:
db = tegra_hsp_doorbell_get(hsp, master);
if (db)
channel = &db->channel;
break;
default:
break;
}
if (IS_ERR(channel))
return ERR_CAST(channel);
spin_lock_irqsave(&hsp->lock, flags);
for (i = 0; i < hsp->mbox.num_chans; i++) {
chan = &hsp->mbox.chans[i];
if (!chan->con_priv) {
chan->con_priv = channel;
channel->chan = chan;
break;
}
chan = NULL;
}
spin_unlock_irqrestore(&hsp->lock, flags);
return chan ?: ERR_PTR(-EBUSY);
}
static void tegra_hsp_remove_doorbells(struct tegra_hsp *hsp)
{
struct tegra_hsp_doorbell *db, *tmp;
unsigned long flags;
spin_lock_irqsave(&hsp->lock, flags);
list_for_each_entry_safe(db, tmp, &hsp->doorbells, list)
__tegra_hsp_doorbell_destroy(db);
spin_unlock_irqrestore(&hsp->lock, flags);
}
static int tegra_hsp_add_doorbells(struct tegra_hsp *hsp)
{
const struct tegra_hsp_db_map *map = hsp->soc->map;
struct tegra_hsp_channel *channel;
while (map->name) {
channel = tegra_hsp_doorbell_create(hsp, map->name,
map->master, map->index);
if (IS_ERR(channel)) {
tegra_hsp_remove_doorbells(hsp);
return PTR_ERR(channel);
}
map++;
}
return 0;
}
static int tegra_hsp_probe(struct platform_device *pdev)
{
struct tegra_hsp *hsp;
struct resource *res;
u32 value;
int err;
hsp = devm_kzalloc(&pdev->dev, sizeof(*hsp), GFP_KERNEL);
if (!hsp)
return -ENOMEM;
hsp->soc = of_device_get_match_data(&pdev->dev);
INIT_LIST_HEAD(&hsp->doorbells);
spin_lock_init(&hsp->lock);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
hsp->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(hsp->regs))
return PTR_ERR(hsp->regs);
value = tegra_hsp_readl(hsp, HSP_INT_DIMENSIONING);
hsp->num_sm = (value >> HSP_nSM_SHIFT) & HSP_nINT_MASK;
hsp->num_ss = (value >> HSP_nSS_SHIFT) & HSP_nINT_MASK;
hsp->num_as = (value >> HSP_nAS_SHIFT) & HSP_nINT_MASK;
hsp->num_db = (value >> HSP_nDB_SHIFT) & HSP_nINT_MASK;
hsp->num_si = (value >> HSP_nSI_SHIFT) & HSP_nINT_MASK;
err = platform_get_irq_byname(pdev, "doorbell");
if (err < 0) {
dev_err(&pdev->dev, "failed to get doorbell IRQ: %d\n", err);
return err;
}
hsp->irq = err;
hsp->mbox.of_xlate = of_tegra_hsp_xlate;
hsp->mbox.num_chans = 32;
hsp->mbox.dev = &pdev->dev;
hsp->mbox.txdone_irq = false;
hsp->mbox.txdone_poll = false;
hsp->mbox.ops = &tegra_hsp_doorbell_ops;
hsp->mbox.chans = devm_kcalloc(&pdev->dev, hsp->mbox.num_chans,
sizeof(*hsp->mbox.chans),
GFP_KERNEL);
if (!hsp->mbox.chans)
return -ENOMEM;
err = tegra_hsp_add_doorbells(hsp);
if (err < 0) {
dev_err(&pdev->dev, "failed to add doorbells: %d\n", err);
return err;
}
platform_set_drvdata(pdev, hsp);
err = mbox_controller_register(&hsp->mbox);
if (err) {
dev_err(&pdev->dev, "failed to register mailbox: %d\n", err);
tegra_hsp_remove_doorbells(hsp);
return err;
}
err = devm_request_irq(&pdev->dev, hsp->irq, tegra_hsp_doorbell_irq,
IRQF_NO_SUSPEND, dev_name(&pdev->dev), hsp);
if (err < 0) {
dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n",
hsp->irq, err);
return err;
}
return 0;
}
static int tegra_hsp_remove(struct platform_device *pdev)
{
struct tegra_hsp *hsp = platform_get_drvdata(pdev);
mbox_controller_unregister(&hsp->mbox);
tegra_hsp_remove_doorbells(hsp);
return 0;
}
static const struct tegra_hsp_db_map tegra186_hsp_db_map[] = {
{ "ccplex", TEGRA_HSP_DB_MASTER_CCPLEX, HSP_DB_CCPLEX, },
{ "bpmp", TEGRA_HSP_DB_MASTER_BPMP, HSP_DB_BPMP, },
{ /* sentinel */ }
};
static const struct tegra_hsp_soc tegra186_hsp_soc = {
.map = tegra186_hsp_db_map,
};
static const struct of_device_id tegra_hsp_match[] = {
{ .compatible = "nvidia,tegra186-hsp", .data = &tegra186_hsp_soc },
{ }
};
static struct platform_driver tegra_hsp_driver = {
.driver = {
.name = "tegra-hsp",
.of_match_table = tegra_hsp_match,
},
.probe = tegra_hsp_probe,
.remove = tegra_hsp_remove,
};
static int __init tegra_hsp_init(void)
{
return platform_driver_register(&tegra_hsp_driver);
}
core_initcall(tegra_hsp_init);
@@ -77,5 +77,19 @@ config ARCH_TEGRA_210_SOC
controllers, such as GPIO, I2C, SPI, SDHCI, PCIe, SATA and XHCI, to
name only a few.
config ARCH_TEGRA_186_SOC
bool "NVIDIA Tegra186 SoC"
select MAILBOX
select TEGRA_BPMP
select TEGRA_HSP_MBOX
select TEGRA_IVC
help
Enable support for the NVIDIA Tegra186 SoC. The Tegra186 features a
combination of Denver and Cortex-A57 CPU cores and a GPU based on
the Pascal architecture. It contains an ADSP with a Cortex-A9 CPU
used for audio processing, hardware video encoders/decoders with
multi-format support, ISP for image capture processing and BPMP for
power management.
endif
endif
(This diff has been collapsed.)
/*
* This header provides constants for binding nvidia,tegra186-hsp.
*/
#ifndef _DT_BINDINGS_MAILBOX_TEGRA186_HSP_H
#define _DT_BINDINGS_MAILBOX_TEGRA186_HSP_H
/*
* These define the type of mailbox that is to be used (doorbell, shared
* mailbox, shared semaphore or arbitrated semaphore).
*/
#define TEGRA_HSP_MBOX_TYPE_DB 0x0
#define TEGRA_HSP_MBOX_TYPE_SM 0x1
#define TEGRA_HSP_MBOX_TYPE_SS 0x2
#define TEGRA_HSP_MBOX_TYPE_AS 0x3
/*
* These defines represent the bit associated with the given master ID in the
* doorbell registers.
*/
#define TEGRA_HSP_DB_MASTER_CCPLEX 17
#define TEGRA_HSP_DB_MASTER_BPMP 19
#endif
/*
* Copyright (c) 2015-2016, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _DT_BINDINGS_POWER_TEGRA186_POWERGATE_H
#define _DT_BINDINGS_POWER_TEGRA186_POWERGATE_H
#define TEGRA186_POWER_DOMAIN_AUD 0
#define TEGRA186_POWER_DOMAIN_DFD 1
#define TEGRA186_POWER_DOMAIN_DISP 2
#define TEGRA186_POWER_DOMAIN_DISPB 3
#define TEGRA186_POWER_DOMAIN_DISPC 4
#define TEGRA186_POWER_DOMAIN_ISPA 5
#define TEGRA186_POWER_DOMAIN_NVDEC 6
#define TEGRA186_POWER_DOMAIN_NVJPG 7
#define TEGRA186_POWER_DOMAIN_MPE 8
#define TEGRA186_POWER_DOMAIN_PCX 9
#define TEGRA186_POWER_DOMAIN_SAX 10
#define TEGRA186_POWER_DOMAIN_VE 11
#define TEGRA186_POWER_DOMAIN_VIC 12
#define TEGRA186_POWER_DOMAIN_XUSBA 13
#define TEGRA186_POWER_DOMAIN_XUSBB 14
#define TEGRA186_POWER_DOMAIN_XUSBC 15
#define TEGRA186_POWER_DOMAIN_GPU 43
#define TEGRA186_POWER_DOMAIN_MAX 44
#endif
/*
* Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _ABI_MACH_T186_RESET_T186_H_
#define _ABI_MACH_T186_RESET_T186_H_
#define TEGRA186_RESET_ACTMON 0
#define TEGRA186_RESET_AFI 1
#define TEGRA186_RESET_CEC 2
#define TEGRA186_RESET_CSITE 3
#define TEGRA186_RESET_DP2 4
#define TEGRA186_RESET_DPAUX 5
#define TEGRA186_RESET_DSI 6
#define TEGRA186_RESET_DSIB 7
#define TEGRA186_RESET_DTV 8
#define TEGRA186_RESET_DVFS 9
#define TEGRA186_RESET_ENTROPY 10
#define TEGRA186_RESET_EXTPERIPH1 11
#define TEGRA186_RESET_EXTPERIPH2 12
#define TEGRA186_RESET_EXTPERIPH3 13
#define TEGRA186_RESET_GPU 14
#define TEGRA186_RESET_HDA 15
#define TEGRA186_RESET_HDA2CODEC_2X 16
#define TEGRA186_RESET_HDA2HDMICODEC 17
#define TEGRA186_RESET_HOST1X 18
#define TEGRA186_RESET_I2C1 19
#define TEGRA186_RESET_I2C2 20
#define TEGRA186_RESET_I2C3 21
#define TEGRA186_RESET_I2C4 22
#define TEGRA186_RESET_I2C5 23
#define TEGRA186_RESET_I2C6 24
#define TEGRA186_RESET_ISP 25
#define TEGRA186_RESET_KFUSE 26
#define TEGRA186_RESET_LA 27
#define TEGRA186_RESET_MIPI_CAL 28
#define TEGRA186_RESET_PCIE 29
#define TEGRA186_RESET_PCIEXCLK 30
#define TEGRA186_RESET_SATA 31
#define TEGRA186_RESET_SATACOLD 32
#define TEGRA186_RESET_SDMMC1 33
#define TEGRA186_RESET_SDMMC2 34
#define TEGRA186_RESET_SDMMC3 35
#define TEGRA186_RESET_SDMMC4 36
#define TEGRA186_RESET_SE 37
#define TEGRA186_RESET_SOC_THERM 38
#define TEGRA186_RESET_SOR0 39
#define TEGRA186_RESET_SPI1 40
#define TEGRA186_RESET_SPI2 41
#define TEGRA186_RESET_SPI3 42
#define TEGRA186_RESET_SPI4 43
#define TEGRA186_RESET_TMR 44
#define TEGRA186_RESET_TRIG_SYS 45
#define TEGRA186_RESET_TSEC 46
#define TEGRA186_RESET_UARTA 47
#define TEGRA186_RESET_UARTB 48
#define TEGRA186_RESET_UARTC 49
#define TEGRA186_RESET_UARTD 50
#define TEGRA186_RESET_VI 51
#define TEGRA186_RESET_VIC 52
#define TEGRA186_RESET_XUSB_DEV 53
#define TEGRA186_RESET_XUSB_HOST 54
#define TEGRA186_RESET_XUSB_PADCTL 55
#define TEGRA186_RESET_XUSB_SS 56
#define TEGRA186_RESET_AON_APB 57
#define TEGRA186_RESET_AXI_CBB 58
#define TEGRA186_RESET_BPMP_APB 59
#define TEGRA186_RESET_CAN1 60
#define TEGRA186_RESET_CAN2 61
#define TEGRA186_RESET_DMIC5 62
#define TEGRA186_RESET_DSIC 63
#define TEGRA186_RESET_DSID 64
#define TEGRA186_RESET_EMC_EMC 65
#define TEGRA186_RESET_EMC_MEM 66
#define TEGRA186_RESET_EMCSB_EMC 67
#define TEGRA186_RESET_EMCSB_MEM 68
#define TEGRA186_RESET_EQOS 69
#define TEGRA186_RESET_GPCDMA 70
#define TEGRA186_RESET_GPIO_CTL0 71
#define TEGRA186_RESET_GPIO_CTL1 72
#define TEGRA186_RESET_GPIO_CTL2 73
#define TEGRA186_RESET_GPIO_CTL3 74
#define TEGRA186_RESET_GPIO_CTL4 75
#define TEGRA186_RESET_GPIO_CTL5 76
#define TEGRA186_RESET_I2C10 77
#define TEGRA186_RESET_I2C12 78
#define TEGRA186_RESET_I2C13 79
#define TEGRA186_RESET_I2C14 80
#define TEGRA186_RESET_I2C7 81
#define TEGRA186_RESET_I2C8 82
#define TEGRA186_RESET_I2C9 83
#define TEGRA186_RESET_JTAG2AXI 84
#define TEGRA186_RESET_MPHY_IOBIST 85
#define TEGRA186_RESET_MPHY_L0_RX 86
#define TEGRA186_RESET_MPHY_L0_TX 87
#define TEGRA186_RESET_NVCSI 88
#define TEGRA186_RESET_NVDISPLAY0_HEAD0 89
#define TEGRA186_RESET_NVDISPLAY0_HEAD1 90
#define TEGRA186_RESET_NVDISPLAY0_HEAD2 91
#define TEGRA186_RESET_NVDISPLAY0_MISC 92
#define TEGRA186_RESET_NVDISPLAY0_WGRP0 93
#define TEGRA186_RESET_NVDISPLAY0_WGRP1 94
#define TEGRA186_RESET_NVDISPLAY0_WGRP2 95
#define TEGRA186_RESET_NVDISPLAY0_WGRP3 96
#define TEGRA186_RESET_NVDISPLAY0_WGRP4 97
#define TEGRA186_RESET_NVDISPLAY0_WGRP5 98
#define TEGRA186_RESET_PWM1 99
#define TEGRA186_RESET_PWM2 100
#define TEGRA186_RESET_PWM3 101
#define TEGRA186_RESET_PWM4 102
#define TEGRA186_RESET_PWM5 103
#define TEGRA186_RESET_PWM6 104
#define TEGRA186_RESET_PWM7 105
#define TEGRA186_RESET_PWM8 106
#define TEGRA186_RESET_SCE_APB 107
#define TEGRA186_RESET_SOR1 108
#define TEGRA186_RESET_TACH 109
#define TEGRA186_RESET_TSC 110
#define TEGRA186_RESET_UARTF 111
#define TEGRA186_RESET_UARTG 112
#define TEGRA186_RESET_UFSHC 113
#define TEGRA186_RESET_UFSHC_AXI_M 114
#define TEGRA186_RESET_UPHY 115
#define TEGRA186_RESET_ADSP 116
#define TEGRA186_RESET_ADSPDBG 117
#define TEGRA186_RESET_ADSPINTF 118
#define TEGRA186_RESET_ADSPNEON 119
#define TEGRA186_RESET_ADSPPERIPH 120
#define TEGRA186_RESET_ADSPSCU 121
#define TEGRA186_RESET_ADSPWDT 122
#define TEGRA186_RESET_APE 123
#define TEGRA186_RESET_DPAUX1 124
#define TEGRA186_RESET_NVDEC 125
#define TEGRA186_RESET_NVENC 126
#define TEGRA186_RESET_NVJPG 127
#define TEGRA186_RESET_PEX_USB_UPHY 128
#define TEGRA186_RESET_QSPI 129
#define TEGRA186_RESET_TSECB 130
#define TEGRA186_RESET_VI_I2C 131
#define TEGRA186_RESET_UARTE 132
#define TEGRA186_RESET_TOP_GTE 133
#define TEGRA186_RESET_SHSP 134
#define TEGRA186_RESET_PEX_USB_UPHY_L5 135
#define TEGRA186_RESET_PEX_USB_UPHY_L4 136
#define TEGRA186_RESET_PEX_USB_UPHY_L3 137
#define TEGRA186_RESET_PEX_USB_UPHY_L2 138
#define TEGRA186_RESET_PEX_USB_UPHY_L1 139
#define TEGRA186_RESET_PEX_USB_UPHY_L0 140
#define TEGRA186_RESET_PEX_USB_UPHY_PLL1 141
#define TEGRA186_RESET_PEX_USB_UPHY_PLL0 142
#define TEGRA186_RESET_TSCTNVI 143
#define TEGRA186_RESET_EXTPERIPH4 144
#define TEGRA186_RESET_DSIPADCTL 145
#define TEGRA186_RESET_AUD_MCLK 146
#define TEGRA186_RESET_MPHY_CLK_CTL 147
#define TEGRA186_RESET_MPHY_L1_RX 148
#define TEGRA186_RESET_MPHY_L1_TX 149
#define TEGRA186_RESET_UFSHC_LP 150
#define TEGRA186_RESET_BPMP_NIC 151
#define TEGRA186_RESET_BPMP_NSYSPORESET 152
#define TEGRA186_RESET_BPMP_NRESET 153
#define TEGRA186_RESET_BPMP_DBGRESETN 154
#define TEGRA186_RESET_BPMP_PRESETDBGN 155
#define TEGRA186_RESET_BPMP_PM 156
#define TEGRA186_RESET_BPMP_CVC 157
#define TEGRA186_RESET_BPMP_DMA 158
#define TEGRA186_RESET_BPMP_HSP 159
#define TEGRA186_RESET_TSCTNBPMP 160
#define TEGRA186_RESET_BPMP_TKE 161
#define TEGRA186_RESET_BPMP_GTE 162
#define TEGRA186_RESET_BPMP_PM_ACTMON 163
#define TEGRA186_RESET_AON_NIC 164
#define TEGRA186_RESET_AON_NSYSPORESET 165
#define TEGRA186_RESET_AON_NRESET 166
#define TEGRA186_RESET_AON_DBGRESETN 167
#define TEGRA186_RESET_AON_PRESETDBGN 168
#define TEGRA186_RESET_AON_ACTMON 169
#define TEGRA186_RESET_AOPM 170
#define TEGRA186_RESET_AOVC 171
#define TEGRA186_RESET_AON_DMA 172
#define TEGRA186_RESET_AON_GPIO 173
#define TEGRA186_RESET_AON_HSP 174
#define TEGRA186_RESET_TSCTNAON 175
#define TEGRA186_RESET_AON_TKE 176
#define TEGRA186_RESET_AON_GTE 177
#define TEGRA186_RESET_SCE_NIC 178
#define TEGRA186_RESET_SCE_NSYSPORESET 179
#define TEGRA186_RESET_SCE_NRESET 180
#define TEGRA186_RESET_SCE_DBGRESETN 181
#define TEGRA186_RESET_SCE_PRESETDBGN 182
#define TEGRA186_RESET_SCE_ACTMON 183
#define TEGRA186_RESET_SCE_PM 184
#define TEGRA186_RESET_SCE_DMA 185
#define TEGRA186_RESET_SCE_HSP 186
#define TEGRA186_RESET_TSCTNSCE 187
#define TEGRA186_RESET_SCE_TKE 188
#define TEGRA186_RESET_SCE_GTE 189
#define TEGRA186_RESET_SCE_CFG 190
#define TEGRA186_RESET_ADSP_ALL 191
/** @brief controls the power up/down sequence of UFSHC PSW partition. Controls LP_PWR_READY, LP_ISOL_EN, and LP_RESET_N signals */
#define TEGRA186_RESET_UFSHC_LP_SEQ 192
#define TEGRA186_RESET_SIZE 193
#endif
(This diff has been collapsed.)
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#ifndef __SOC_TEGRA_BPMP_H
#define __SOC_TEGRA_BPMP_H
#include <linux/mailbox_client.h>
#include <linux/reset-controller.h>
#include <linux/semaphore.h>
#include <linux/types.h>
#include <soc/tegra/bpmp-abi.h>
struct tegra_bpmp_clk;
struct tegra_bpmp_soc {
struct {
struct {
unsigned int offset;
unsigned int count;
unsigned int timeout;
} cpu_tx, thread, cpu_rx;
} channels;
unsigned int num_resets;
};
struct tegra_bpmp_mb_data {
u32 code;
u32 flags;
u8 data[MSG_DATA_MIN_SZ];
} __packed;
struct tegra_bpmp_channel {
struct tegra_bpmp *bpmp;
struct tegra_bpmp_mb_data *ib;
struct tegra_bpmp_mb_data *ob;
struct completion completion;
struct tegra_ivc *ivc;
};
typedef void (*tegra_bpmp_mrq_handler_t)(unsigned int mrq,
struct tegra_bpmp_channel *channel,
void *data);
struct tegra_bpmp_mrq {
struct list_head list;
unsigned int mrq;
tegra_bpmp_mrq_handler_t handler;
void *data;
};
struct tegra_bpmp {
const struct tegra_bpmp_soc *soc;
struct device *dev;
struct {
struct gen_pool *pool;
dma_addr_t phys;
void *virt;
} tx, rx;
struct {
struct mbox_client client;
struct mbox_chan *channel;
} mbox;
struct tegra_bpmp_channel *channels;
unsigned int num_channels;
struct {
unsigned long *allocated;
unsigned long *busy;
unsigned int count;
struct semaphore lock;
} threaded;
struct list_head mrqs;
spinlock_t lock;
struct tegra_bpmp_clk **clocks;
unsigned int num_clocks;
struct reset_controller_dev rstc;
};
struct tegra_bpmp *tegra_bpmp_get(struct device *dev);
void tegra_bpmp_put(struct tegra_bpmp *bpmp);
struct tegra_bpmp_message {
unsigned int mrq;
struct {
const void *data;
size_t size;
} tx;
struct {
void *data;
size_t size;
} rx;
};
int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg);
int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg);
int tegra_bpmp_request_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
tegra_bpmp_mrq_handler_t handler, void *data);
void tegra_bpmp_free_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
void *data);
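/*
 * Rough usage sketch (illustrative only): a client fills in a struct
 * tegra_bpmp_message with an MRQ number and tx/rx buffers and passes it to
 * tegra_bpmp_transfer(), or to tegra_bpmp_transfer_atomic() from atomic
 * context. Handlers for requests originating from the BPMP can be registered
 * with tegra_bpmp_request_mrq() and removed again with tegra_bpmp_free_mrq().
 */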
#if IS_ENABLED(CONFIG_CLK_TEGRA_BPMP)
int tegra_bpmp_init_clocks(struct tegra_bpmp *bpmp);
#else
static inline int tegra_bpmp_init_clocks(struct tegra_bpmp *bpmp)
{
return 0;
}
#endif
#if IS_ENABLED(CONFIG_RESET_TEGRA_BPMP)
int tegra_bpmp_init_resets(struct tegra_bpmp *bpmp);
#else
static inline int tegra_bpmp_init_resets(struct tegra_bpmp *bpmp)
{
return 0;
}
#endif
#endif /* __SOC_TEGRA_BPMP_H */
/*
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#ifndef __TEGRA_IVC_H
#define __TEGRA_IVC_H
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/types.h>
struct tegra_ivc_header;
struct tegra_ivc {
struct device *peer;
struct {
struct tegra_ivc_header *channel;
unsigned int position;
dma_addr_t phys;
} rx, tx;
void (*notify)(struct tegra_ivc *ivc, void *data);
void *notify_data;
unsigned int num_frames;
size_t frame_size;
};
/**
* tegra_ivc_read_get_next_frame - Peek at the next frame to receive
* @ivc: pointer to the IVC channel
*
* Peek at the next frame to be received, without removing it from
* the queue.
*
* Returns a pointer to the frame, or an error encoded pointer.
*/
void *tegra_ivc_read_get_next_frame(struct tegra_ivc *ivc);
/**
* tegra_ivc_read_advance - Advance the read queue
* @ivc: pointer to the IVC channel
*
* Advance the read queue
*
* Returns 0, or a negative error value if failed.
*/
int tegra_ivc_read_advance(struct tegra_ivc *ivc);
/**
* tegra_ivc_write_get_next_frame - Poke at the next frame to transmit
* @ivc: pointer to the IVC channel
*
* Get access to the next frame.
*
* Returns a pointer to the frame, or an error encoded pointer.
*/
void *tegra_ivc_write_get_next_frame(struct tegra_ivc *ivc);
/**
* tegra_ivc_write_advance - Advance the write queue
* @ivc: pointer to the IVC channel
*
* Advance the write queue
*
* Returns 0, or a negative error value if failed.
*/
int tegra_ivc_write_advance(struct tegra_ivc *ivc);
/**
* tegra_ivc_notified - handle internal messages
* @ivc: pointer to the IVC channel
*
* This function must be called following every notification.
*
* Returns 0 if the channel is ready for communication, or -EAGAIN if a channel
* reset is in progress.
*/
int tegra_ivc_notified(struct tegra_ivc *ivc);
/**
* tegra_ivc_reset - initiates a reset of the shared memory state
* @ivc: pointer to the IVC channel
*
* This function must be called after a channel is reserved and before it is
* used for communication. It notifies the remote end of the channel reset;
* the channel becomes ready for use once the reset handshake driven by
* subsequent calls to tegra_ivc_notified() has completed on both ends.
*/
void tegra_ivc_reset(struct tegra_ivc *ivc);
size_t tegra_ivc_align(size_t size);
unsigned tegra_ivc_total_queue_size(unsigned queue_size);
int tegra_ivc_init(struct tegra_ivc *ivc, struct device *peer, void *rx,
dma_addr_t rx_phys, void *tx, dma_addr_t tx_phys,
unsigned int num_frames, size_t frame_size,
void (*notify)(struct tegra_ivc *ivc, void *data),
void *data);
void tegra_ivc_cleanup(struct tegra_ivc *ivc);
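/*
 * Rough usage sketch (illustrative only, error handling omitted): after
 * tegra_ivc_init() the channel is reset with tegra_ivc_reset(), and every
 * notification from the remote end is passed to tegra_ivc_notified() until
 * it returns 0. Frames can then be exchanged:
 *
 *	void *frame = tegra_ivc_write_get_next_frame(ivc);
 *	if (!IS_ERR(frame)) {
 *		memcpy(frame, data, size);
 *		tegra_ivc_write_advance(ivc);
 *	}
 *
 *	frame = tegra_ivc_read_get_next_frame(ivc);
 *	if (!IS_ERR(frame)) {
 *		memcpy(data, frame, size);
 *		tegra_ivc_read_advance(ivc);
 *	}
 */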
#endif /* __TEGRA_IVC_H */