Essentially, a neural network is a compute graph. The data needed for the computation is stored in `Tensor`s, and the computation procedure is described by `Operator`s. An `Operator` calls the `Compute` interface of its corresponding `OpKernel` to operate on the `Tensor`s.
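As a rough sketch of how these pieces fit together, an operator's kernel exposes a `Compute` method that reads input `Tensor`s from the execution context and writes into output `Tensor`s. The class name, context type names, and the `"X"`/`"Y"`/`"Out"` parameter names below are illustrative assumptions, not a verbatim copy of the Paddle source:
```cpp
// Illustrative skeleton only: the kernel/context type names and the
// "X"/"Y"/"Out" parameter names are assumptions for the sake of the example.
template <typename Place, typename T>
class AddKernel : public framework::OpKernel {
 public:
  void Compute(const framework::ExecutionContext& context) const override {
    auto* x = context.Input<Tensor>("X");       // read-only input Tensor
    auto* y = context.Input<Tensor>("Y");       // read-only input Tensor
    auto* out = context.Output<Tensor>("Out");  // output Tensor
    out->mutable_data<T>(context.GetPlace());   // allocate the output memory
    // ... the element-wise work on the Tensors goes here (see the sections below) ...
  }
};
```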
### Eigen Tensor Module
The Eigen Tensor module supports powerful element-wise computation. In addition, a piece of code written using it can be run on both the CPU and the GPU.
Note that the Eigen Tensor module is still under active development, so its test coverage is incomplete and its documentation may be sparse.
For details on the Eigen Tensor module, please see [doc 1](https://github.com/RLovelett/eigen/blob/master/unsupported/Eigen/CXX11/src/Tensor/README.md) and [doc 2](https://bitbucket.org/eigen/eigen/src/default/unsupported/Eigen/CXX11/src/Tensor/README.md).
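To get a feel for the API, here is a minimal standalone example (independent of Paddle) of element-wise addition with `Eigen::Tensor`; the same expression style is what Paddle's kernels use below:
```cpp
#include <iostream>
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  // Two rank-2 tensors of shape 2 x 3.
  Eigen::Tensor<float, 2> a(2, 3);
  Eigen::Tensor<float, 2> b(2, 3);
  a.setConstant(1.0f);
  b.setConstant(2.0f);

  // Element-wise addition; the expression is evaluated on assignment.
  Eigen::Tensor<float, 2> c = a + b;

  std::cout << c(0, 0) << std::endl;  // prints 3
  return 0;
}
```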
### paddle::framework::Tensor
Paddle's `Tensor` is defined in the `framework` directory with the following interface:
```cpp
class Tensor {
 public:
  /*! Return a pointer to mutable memory block. */
  template <typename T>
  inline T* data();

  /**
   * @brief Return a pointer to mutable memory block.
   * @note  If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(platform::Place place);

  /**
   * @brief Return a pointer to mutable memory block.
   * @param[in] dims  The dimensions of the memory block.
   * @note  If not exist, then allocation.
   */
  template <typename T>
  inline T* mutable_data(DDim dims, platform::Place place);

  /*! Resize the dimensions of the memory block. */
  inline Tensor& Resize(const DDim& dims);

 private:
  /*! The memory block; allocation is deferred until mutable_data is called. */
  std::shared_ptr<Placeholder> holder_;
};
```
`Placeholder` is used to delay memory allocation; that is, we can first define a tensor, use `Resize` to configure its shape, and then call `mutable_data` to allocate the actual memory.
```cpp
paddle::framework::Tensor t;
paddle::platform::CPUPlace place;
// set size first
t.Resize({2, 3});
// allocate memory on CPU later
t.mutable_data<float>(place);
```
### paddle::framework::Tensor Usage
`AddOp` demonstrates Tensor's usage.
- InferShape
Before a neural network's compute graph is executed, every `Operator`'s `InferShape` method is called first; it uses `Resize` to configure the size of the output tensor. A sketch of such an `InferShape` is given below.
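A minimal sketch of what `AddOp`'s `InferShape` could look like, assuming an `InferShapeContext` argument and a `PADDLE_ENFORCE_EQ` check macro (both are assumptions here, not quoted from the source):
```cpp
// Sketch: check that the two inputs have the same dimensions, then Resize
// the output Tensor accordingly; no memory is allocated at this point.
void InferShape(const framework::InferShapeContext& ctx) const override {
  PADDLE_ENFORCE_EQ(ctx.Input<Tensor>("X")->dims(),
                    ctx.Input<Tensor>("Y")->dims(),
                    "The two inputs of AddOp must have the same dimensions.");
  ctx.Output<Tensor>("Out")->Resize(ctx.Input<Tensor>("X")->dims());
}
```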
### paddle::framework::Tensor to EigenTensor Conversion
In actual computation, we need to transform the input and output `Tensor`s into formats that Eigen supports. [eigen.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen.h) provides functions that implement the transformation from `paddle::framework::Tensor` to `EigenTensor`/`EigenMatrix`/`EigenVector`/`EigenScalar`.
Using EigenTensor as an example:
```cpp
Tensor t;
float* p = t.mutable_data<float>(make_ddim({1, 2, 3}), platform::CPUPlace());
for (int i = 0; i < 1 * 2 * 3; i++) {
p[i] = static_cast<float>(i);
}
EigenTensor<float, 3>::Type et = EigenTensor<float, 3>::From(t);
```
`From` is an interface method provided by the EigenTensor template that implements the transformation from a `paddle::framework::Tensor` object to an EigenTensor. Since the rank is a template parameter, it needs to be specified explicitly at the time of the transformation.
In Eigen, tensors with different ranks are different types, with `Vector` being a rank-1 instance. Note that `EigenVector<T>::From` converts a 1-dimensional Paddle tensor to a 1-dimensional Eigen tensor, while `EigenVector<T>::Flatten` reshapes a Paddle tensor of any rank and flattens it into a 1-dimensional Eigen tensor. Both resulting tensors have the EigenVector type.
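For example (a small sketch reusing the identifiers from the snippet above):
```cpp
// A 1-dimensional Paddle tensor can be converted directly with From.
Tensor t1;
t1.mutable_data<float>(make_ddim({6}), platform::CPUPlace());
auto v1 = EigenVector<float>::From(t1);

// A higher-rank Paddle tensor is first flattened into one dimension.
Tensor t2;
t2.mutable_data<float>(make_ddim({2, 3}), platform::CPUPlace());
auto v2 = EigenVector<float>::Flatten(t2);  // viewed as a vector of 6 elements
```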
For more transformations, see the [unit tests](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen_test.cc) in the `eigen_test.cc` file.
### Implementing Computation
When implementing the computation, the EigenTensor on the left-hand side of the assignment must be given a device via `.device(...)`. Note that computation between EigenTensors only changes the data held by the Tensor; it does not change the Tensor's shape information.
```cpp
auto x = EigenVector<T>::Flatten(*input0);
auto y = EigenVector<T>::Flatten(*input1);
auto z = EigenVector<T>::Flatten(*output);
auto place = context.GetEigenDevice<Place>();
z.device(place) = x + y;
```
In this code segment, `input0`/`input1`/`output` can be Tensors of arbitrary dimension. We call `Flatten` from `EigenVector` to view a tensor of any dimension as a 1-dimensional EigenVector. After the computation completes, `input0`/`input1`/`output` keep their original shape information; if a new shape is needed, it must be set explicitly with the `Resize` interface.
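To make the last point concrete, here is a small sketch (built from the interfaces used above) showing that `Flatten` only changes the Eigen-side view, not the Tensor's own shape:
```cpp
Tensor out;
out.mutable_data<float>(make_ddim({2, 3}), platform::CPUPlace());

// Flatten gives a 1-dimensional Eigen view of the data; out itself is untouched.
auto flat = EigenVector<float>::Flatten(out);

// out.dims() is still {2, 3}; to actually change the shape, call Resize explicitly.
out.Resize(make_ddim({3, 2}));
```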
Because the Eigen Tensor module is under-documented, please refer to the `OpKernel` computation code in TensorFlow's [kernel module](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/kernels) for additional reference.
<liclass="toctree-l2"><aclass="reference internal"href="../../getstarted/build_and_install/index_en.html">Install and Build</a><ul>
<liclass="toctree-l3"><aclass="reference internal"href="../../getstarted/build_and_install/docker_install_en.html">PaddlePaddle in Docker Containers</a></li>
<liclass="toctree-l3"><aclass="reference internal"href="../../getstarted/build_and_install/build_from_source_en.html">Installing from Sources</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="../usage/k8s/k8s_en.html">Paddle On Kubernetes</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="../usage/k8s/k8s_aws_en.html">Distributed PaddlePaddle Training on AWS with Kubernetes</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="build_en.html">Build PaddlePaddle from Source Code and Run Unit Test</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="new_layer_en.html">Write New Layers</a></li>
<spanid="how-to-use-eigen-in-paddle"></span><h1>How to use Eigen in Paddle<aclass="headerlink"href="#how-to-use-eigen-in-paddle"title="Permalink to this headline">¶</a></h1>
<p>Essentially, a neural network is a compute graph. T data needed for the computation is stored in <codeclass="docutils literal"><spanclass="pre">Tensor</span></code>s and its computation procedure is described by <codeclass="docutils literal"><spanclass="pre">Operator</span></code>s. An <codeclass="docutils literal"><spanclass="pre">Operator</span></code> calls the <codeclass="docutils literal"><spanclass="pre">Compute</span></code> interface in its corresponding <codeclass="docutils literal"><spanclass="pre">OpKernel</span></code> and operates on the <codeclass="docutils literal"><spanclass="pre">Tensor</span></code>.</p>
<divclass="section"id="eigen-tensor-module">
<spanid="eigen-tensor-module"></span><h2>Eigen Tensor Module<aclass="headerlink"href="#eigen-tensor-module"title="Permalink to this headline">¶</a></h2>
<p>The Eigen Tensor module supports powerful element-wise computation. In addition, a piece of code written using it can be run on both the CPU and the GPU.</p>
<p>Note that Eigen Tensor is still being actively developed, so its tests are not completely covered and its documentation may be sparse.</p>
<p>For details on Eigen Tensor module, please see <aclass="reference external"href="https://github.com/RLovelett/eigen/blob/master/unsupported/Eigen/CXX11/src/Tensor/README.md">doc 1</a> and <aclass="reference external"href="https://bitbucket.org/eigen/eigen/src/default/unsupported/Eigen/CXX11/src/Tensor/README.md">doc 2</a>.</p>
</div>
<divclass="section"id="paddle-framework-tensor">
<spanid="paddle-framework-tensor"></span><h2>paddle::framework::Tensor<aclass="headerlink"href="#paddle-framework-tensor"title="Permalink to this headline">¶</a></h2>
<p>Paddle Tensor’s is defined in the framework directory with the following interface:</p>
<p><codeclass="docutils literal"><spanclass="pre">Placeholder</span></code> is used to delay memory allocation; that is, we can first define a tensor, using <codeclass="docutils literal"><spanclass="pre">Resize</span></code> to configure its shape, and then call <codeclass="docutils literal"><spanclass="pre">mutuable_data</span></code> to allocate the actual memory.</p>
<spanid="paddle-framework-tensor-usage"></span><h2>paddle::framework::Tensor Usage<aclass="headerlink"href="#paddle-framework-tensor-usage"title="Permalink to this headline">¶</a></h2>
<p>When computing a neural network’s compute graph, first call every <codeclass="docutils literal"><spanclass="pre">Operator</span></code>‘s <codeclass="docutils literal"><spanclass="pre">InferShape</span></code> method, and use <codeclass="docutils literal"><spanclass="pre">Resize</span></code> to configure the size of the output tensor.</p>
<spanid="paddle-framework-tensoreigentensor"></span><h2>paddle::framework::Tensor到EigenTensor的转换<aclass="headerlink"href="#paddle-framework-tensoreigentensor"title="Permalink to this headline">¶</a></h2>
<p>As shown above, in actual computation, we need to transform the input and output <codeclass="docutils literal"><spanclass="pre">Tensor</span></code>s into formats Eigen supports. We show some functions in <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen.h">eigen.h</a> to implement the transformation from <codeclass="docutils literal"><spanclass="pre">paddle::framework::Tensor</span></code>to <codeclass="docutils literal"><spanclass="pre">EigenTensor/EigenMatrix/EigenVector/EigenScalar</span></code>.</p>
<p><codeclass="docutils literal"><spanclass="pre">From</span></code> is an interfacing method provided by the EigenTensor template, which implements the transformation from a <codeclass="docutils literal"><spanclass="pre">paddle::framework::Tensor</span></code> object to an EigenTensor. Since <codeclass="docutils literal"><spanclass="pre">rank</span></code> is a template parameter, it needs to be explicitly specified at the time of the transformation.</p>
<p>In Eigen, tensors with different ranks are different types, with <codeclass="docutils literal"><spanclass="pre">Vector</span></code> bring a rank-1 instance. Note that <codeclass="docutils literal"><spanclass="pre">EigenVector<T>::From</span></code> uses a transformation from an 1-dimensional Paddle tensor to a 1-dimensional Eigen tensor while <codeclass="docutils literal"><spanclass="pre">EigenVector<T>::Flatten</span></code> reshapes a paddle tensor and flattens it into a 1-dimensional Eigen tensor. Both resulting tensors are still typed EigenVector.</p>
<p>For more transformations, see the <aclass="reference external"href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/eigen_test.cc">unit tests</a> in the <codeclass="docutils literal"><spanclass="pre">eigen_test.cc</span></code> file.</p>
</div>
<divclass="section"id="implementing-computation">
<spanid="implementing-computation"></span><h2>Implementing Computation<aclass="headerlink"href="#implementing-computation"title="Permalink to this headline">¶</a></h2>
<p>While computing, the device interface is needed from the EigenTensors on the left hand side of the assignments. Note that the computation between EigenTensors only changes the data originally inthe Tensor and does not change all the shape information associated with the Tensor.</p>
<p>In this code segment, input0/input1/output can be Tensors of arbitrary dimension. We are calling Flatten from EigenVector, transforming a tensor of any dimension into a 1-dimensional EigenVector. After completing computation, input0/input1/output will retain the same shape information, and they can be resized using the <codeclass="docutils literal"><spanclass="pre">Resize</span></code> interface.</p>
<p>Because the Eigen Tensor module is under-documented, please refer to <codeclass="docutils literal"><spanclass="pre">OpKernel</span></code>‘s computation code in TensorFlow’s <aclass="reference external"href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/kernels">kernel module documentation</a>.</p>
Built with <ahref="http://sphinx-doc.org/">Sphinx</a> using a <ahref="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <ahref="https://readthedocs.org">Read the Docs</a>.