Commit c5a3067a authored by Travis CI

Deploy to GitHub Pages: cd775a13

Parent f2cb48f1
@@ -105,18 +105,10 @@ There are two ways to execute a Fluid program. When a program is executed, it c
There is a C++ class [`Executor`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.h), which runs a `ProgramDesc`, similar to how an interpreter runs a Python program.
Fluid is moving in the direction of a compiler, which is explained in more detail later in this article.
Fluid is moving in the direction of a compiler, which is explained in [fluid_compiler.md](fluid_compiler.md).
## Backward Compatibility of Fluid
Despite all the advantages of removing the concept of a *model*, hardware manufacturers might still prefer that the concept exist, since it makes it easier for them to support multiple frameworks at once and to run a trained model during inference. For example, Nervana, a startup company acquired by Intel, has been working on an XPU that reads models in a format known as [n-graph](https://github.com/NervanaSystems/ngraph). Similarly, [Movidius](https://www.movidius.com/) is producing a mobile deep learning chip that reads and runs graphs of operators. The well-known [ONNX](https://github.com/onnx/onnx) is also a file format for graphs of operators.
For Fluid, we can write a converter that extracts the parts of the `ProgramDesc` protobuf message, converts them into a graph of operators, and exports the graph into the ONNX or n-graph format.
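As a hedged sketch, the core of such a converter could be a loop over the ops in a `ProgramDesc`. The snippet below assumes protobuf bindings generated from `framework.proto` (called `framework_pb2` here, so not imported directly) and the `onnx` Python package; the `OP_MAP` table and the exact field names are illustrative assumptions, not settled API:

```python
# Sketch: export the ops of a ProgramDesc as an ONNX graph.
# OP_MAP is a hypothetical table mapping Fluid op types to ONNX op types;
# field names (blocks, ops, arguments) are assumed from framework.proto.
from onnx import helper

OP_MAP = {"mult": "MatMul"}  # one entry per supported Fluid op (assumed)

def to_onnx(program_desc):
    nodes = []
    for op in program_desc.blocks[0].ops:
        nodes.append(helper.make_node(
            OP_MAP[op.type],
            inputs=[a for var in op.inputs for a in var.arguments],
            outputs=[a for var in op.outputs for a in var.arguments]))
    graph = helper.make_graph(nodes, "fluid_program", inputs=[], outputs=[])
    return helper.make_model(graph)
```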
## Towards a Deep Learning Language and the Compiler
We can change the `if-then-else` and loop structures in the above Fluid example programs a little, turning Fluid into a new programming language distinct from Python.
Even if we do not invent a new language, as long as we get the `ProgramDesc` message filled in, we can write a transpiler that translates each invocation of an operator into a C++ call to a kernel function of that operator. For example, a transpiler that weaves in the CUDA kernels outputs an NVIDIA-friendly C++ program, which can be built using `nvcc`. Another transpiler could generate MKL-friendly code that should be built using `icc` from Intel. More interestingly, we can translate a Fluid program into its distributed version: two `ProgramDesc` messages, one running on the trainer process and the other on the parameter server. For more details on the last example, the [concurrent programming design](concurrent_programming.md) document is a good pointer. The following figure explains the proposed two-stage process:
![](fluid-compiler.png)
# PaddlePaddle Fluid: Towards a Compiled Programming Language
As described in [fluid.md](fluid.md), when a Fluid application program
runs, it generates a `ProgramDesc` protobuf message as an intermediate
representation of itself. The C++ class `Executor` can run this
protobuf message the way an interpreter runs a program. This article
describes the Fluid compiler.
![](fluid-compiler.png)
## ProgramDesc
Before we go deeper into the idea of a compiled language, let us take a
look at a simple example Fluid application.
```python
import "fluid"

func paddlepaddle() {
  X = fluid.read(...)
  W = fluid.Tensor(...)
  Y = fluid.mult(X, W)
}
```
This program consists of a [block](block.md) of three operators --
`read`, `assign`, and `mult`. Its `ProgramDesc` message looks like
the following:
```protobuf
message ProgramDesc {
  block[0] = Block {
    vars = [X, W, Y],
    ops = [
      read(output = X)
      assign(input = ..., output = W)
      mult(input = {X, W}, output = Y)
    ],
  }
}
```
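To make the message concrete, here is a hedged sketch of how it could be assembled from Python through protobuf bindings generated from `framework.proto`. The module name `framework_pb2` and the field names (`blocks`, `vars`, `ops`, `parameter`, `arguments`) are assumptions read off the message above; the `assign` op is elided for brevity:

```python
# Sketch: build the ProgramDesc above with assumed protobuf bindings.
import framework_pb2  # bindings assumed generated from framework.proto

desc = framework_pb2.ProgramDesc()
block = desc.blocks.add()              # block[0], the global block
for name in ("X", "W", "Y"):
    block.vars.add().name = name

read = block.ops.add()                 # read(output = X)
read.type = "read"
out = read.outputs.add()
out.parameter = "Out"
out.arguments.append("X")

mult = block.ops.add()                 # mult(input = {X, W}, output = Y)
mult.type = "mult"
inp = mult.inputs.add()
inp.parameter = "X"
inp.arguments.extend(["X", "W"])
out = mult.outputs.add()
out.parameter = "Out"
out.arguments.append("Y")
```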
## Transpilers
We can write a transpiler program that takes a `ProgramDesc`, e.g.,
the above one, and outputs another `ProgramDesc`. Let us look at some
examples:
1. *Memory optimization transpiler*: We can write a transpiler that
   inserts `FreeMemoryOp`s into the above example `ProgramDesc` so as
   to free memory early, before the end of an iteration, keeping the
   memory footprint small (see the sketch after this list).
1. *Distributed training transpiler*: We can write a transpiler that
   converts a `ProgramDesc` into its distributed version of two
   `ProgramDesc`s -- one to be run by the trainer processes and the
   other by the parameter server.
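A sketch of the memory-optimization transpiler, under the same assumed `framework_pb2` bindings as before; the op type `"free_memory"` is hypothetical, and a production pass would also skip parameters and block outputs:

```python
# Sketch: insert a free op right after the last use of each variable.
def free_memory_pass(desc):
    block = desc.blocks[0]
    last_use = {}
    for i, op in enumerate(block.ops):
        for var in op.inputs:
            for name in var.arguments:
                last_use[name] = i       # overwritten until the last reader

    new_ops = []
    for i, op in enumerate(block.ops):
        copied = framework_pb2.OpDesc()
        copied.CopyFrom(op)              # copy so we can rebuild the list
        new_ops.append(copied)
        for name, last in last_use.items():
            if last == i:                # the last reader of `name` just ran
                free = framework_pb2.OpDesc()
                free.type = "free_memory"   # hypothetical op type
                arg = free.inputs.add()
                arg.parameter = "X"
                arg.arguments.append(name)
                new_ops.append(free)

    del block.ops[:]
    block.ops.extend(new_ops)            # extend() copies messages back in
    return desc
```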
In the rest of this article, we talk about a special kind of
transpiler, a *native code generator*, which takes a `ProgramDesc` and
generates a `.cu` (or `.cc`) file that C++ compilers (gcc, nvcc, icc)
can build into a binary.
## Native Code Generator
For the above example, the native code generator transpiler, say, the
CUDA code generator, should generate a `main` function:
```c++
int main() {
  auto X = fluid_cuda_read(...);
  auto W = fluid_cuda_create_tensor(...);
  auto Y = fluid_cuda_mult(X, W);
  return 0;
}
```
and the definitions of the functions `fluid_cuda_read`,
`fluid_cuda_create_tensor`, and `fluid_cuda_mult`. Note that each
function can simply define a C++ instance of the corresponding
operator and run it. For example:
```c++
paddle::Tensor fluid_cuda_read(...) {
  paddle::Tensor t;
  paddle::operators::Read r(&t, ...);
  r.Run();
  return t;
}
```
For computational operators that have multiple *kernels*, each
targeting a specific hardware platform (the `mult` operator, for
example), the generated code should call the corresponding CUDA kernel:
```c++
paddle::Tensor fluid_cuda_mult(const paddle::Tensor& a,
                               const paddle::Tensor& b) {
  paddle::Tensor t;
  paddle::operators::Mult m(a, b, ...);
  m.Run(cuda_context);  // run the operator's CUDA kernel
  return t;
}
```
where `cuda_context` could be a global variable of type
`paddle::CUDADeviceContext`.
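To connect the pieces, the generator itself can be a short program that walks a block and prints the C++ shown above. The sketch below assumes the same `framework_pb2` field names as earlier; the name table mirrors the examples (note that `assign` lowers to `fluid_cuda_create_tensor`), and argument rendering is simplified:

```python
# Sketch: emit the main() function from a block's op list.
FUNC_NAME = {"read": "fluid_cuda_read",
             "assign": "fluid_cuda_create_tensor",
             "mult": "fluid_cuda_mult"}  # mapping mirrors the examples above

def gen_main(block):
    lines = ["int main() {"]
    for op in block.ops:
        outs = [a for v in op.outputs for a in v.arguments]
        args = ", ".join(a for v in op.inputs for a in v.arguments)
        lines.append("  auto %s = %s(%s);"
                     % (outs[0], FUNC_NAME[op.type], args))
    lines.append("  return 0;")
    lines.append("}")
    return "\n".join(lines)
```

A driver could then write the result of `gen_main` to a `.cu` file and invoke `nvcc` on it.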
## Multi-Block Code Generation
Most Fluid application programs have more than one block. To
execute them, we need to trace [scopes](scope.md).
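One plausible direction, sketched under the same assumptions as before: emit each sub-block as a nested C++ brace scope, so that tensors local to the block (say, a loop body) are freed on exit. The set of block-owning op types and the helpers `get_sub_block_attr` and `emit_call` are hypothetical:

```python
# Sketch: recursive, scope-aware code generation for nested blocks.
BLOCK_OPS = {"while", "conditional_block"}  # assumed block-owning op types

def gen_block(desc, block_idx, indent="  "):
    lines = [indent + "{  // enter scope of block %d" % block_idx]
    for op in desc.blocks[block_idx].ops:
        if op.type in BLOCK_OPS:
            sub_idx = get_sub_block_attr(op)      # hypothetical attr lookup
            lines += gen_block(desc, sub_idx, indent + "  ")
        else:
            # emit one call per op, as gen_main does above
            lines.append(indent + "  " + emit_call(op))
    lines.append(indent + "}  // leave scope, free block-local tensors")
    return lines
```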
This file's source diff is too large to display; you can view the blob instead.
......@@ -105,18 +105,10 @@ There are two ways to execute a Fluid program. When a program is executed, it c
There is a C++ class [`Executor`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.h), which runs a `ProgramDesc`, similar to how an interpreter runs a Python program.
Fluid is moving towards the direction of a compiler, which is explain in more detail later in this article.
Fluid is moving towards the direction of a compiler, which is explain in [fluid_compiler.md](fluid_compiler.md).
## Backward Compatibility of Fluid
Given all the advantages from the removal of the concept of a *model*, hardware manufacturers might still prefer the existence of the concept of a model, so it would be easier for them to support multiple frameworks all at once and could run a trained model during inference. For example, Nervana, a startup company acquired by Intel, has been working on an XPU that reads the models in the format known as [n-graph](https://github.com/NervanaSystems/ngraph). Similarly, [Movidius](https://www.movidius.com/) is producing a mobile deep learning chip that reads and runs graphs of operators. The well-known [ONNX](https://github.com/onnx/onnx) is also a file format of graphs of operators.
For Fluid, we can write a converter that extracts the parts in the `ProgramDesc` protobuf message, converts them into a graph of operators, and exports the graph into the ONNX or n-graph format.
## Towards a Deep Learning Language and the Compiler
We can change the `if-then-else` and loop structure a little bit in the above Fluid example programs, to make it into a new programming language, different than Python.
Even if we do not invent a new language, as long as we get the `ProgramDesc` message filled in, we can write a transpiler, which translates each invocation to an operator, into a C++ call to a kernel function of that operator. For example, a transpiler that weaves the CUDA kernels outputs an NVIDIA-friendly C++ program, which can be built using `nvcc`. Another transpiler could generate MKL-friendly code that should be built using `icc` from Intel. More interestingly, we can translate a Fluid program into its distributed version of two `ProgramDesc` messages, one for running on the trainer process, and the other one for the parameter server. For more details of the last example, the [concurrent programming design](concurrent_programming.md) document would be a good pointer. The following figure explains the proposed two-stage process:
![](fluid-compiler.png)
# PaddlePaddle Fluid: Towards a Compiled Programming Language
As described in [fluid.md](fluid.md), when a Fluid application program
runs, it generates a `ProgramDesc` protobuf message as an intermediate
representation of itself. The C++ class `Executor` can run this
protobuf message as an interpreter. This article describes the Fluid
compiler.
![](fluid-compiler.png)
## ProgramDesc
Before we go deeper into the idea of compiled language, let us take a
look at a simple example Fluid application.
```python
import "fluid"
func paddlepaddle() {
X = fluid.read(...)
W = fluid.Tensor(...)
Y = fluid.mult(X, W)
}
```
This program consists of a [block](block.md) of three operators --
`read`, `assign`, and `mult`. Its `ProgramDesc` message looks like
the following
```protobuf
message ProgramDesc {
block[0] = Block {
vars = [X, W, Y],
ops = [
read(output = X)
assign(input = ..., output = W)
mult(input = {X, W}, output = Y)
],
}
}
```
## Transpilers
We can write a transpiler program that takes a `ProgramDesc`, e.g.,
the above one, and outputs another `ProgramDesc`. Let us take some
examples:
1. *Memory optimization transpiler*: We can write a transpiler that
inserts some `FreeMemoryOp`s in the above example `ProgramDesc` so
to free memory early, before the end of an iteration, so to keep a
small memory footprint.
1. *Distributed training transpiler*: We can write a transpiler that
converts a`ProgramDesc` into its distributed version of two
`ProgramDesc`s -- one for running by the trainer processes and the
other for the parameter server.
In the rest of this article, we talk about a special kind of
transpiler, *Native code generator*, which takes a `ProgramDesc` and
generates a `.cu` (or `.cc`) file, which could be built by C++
compilers (gcc, nvcc, icc) into binaries.
## Native Code Generator
For the above example, the native code generator transpiler, say, the
CUDA code generator, should generate a `main` function:
```c++
void main() {
auto X = fluid_cuda_read(...);
auto W = fluid_cuda_create_tensor(...);
auto Y = fluid_cuda_mult(X, W);
}
```
and the definitions of functions `fluid_cuda_read`,
`fluid_cuda_create_tensor`, and `fluid_cuda_mult`. Please be aware
that each function could just define a C++ instance of an operator and
run it. For example
```c++
paddle::Tensor fluid_cuda_read(...) {
paddle::Tensor t;
paddle::operator::Read r(&t, ...);
r.Run();
return t;
}
```
For computational operators that have multiple *kernels*, each for a
specific hardware platform, for example, the `mult` operator, the
generated code should call its CUDA kernel:
```c++
paddle::Tensor fluid_cuda_mult(const paddle::Tensor& a,
const paddle::Tensor& b) {
paddle::Tensor t;
paddle::operator::Mult m(a, b, ...);
Mult.Run(cuda_context);
}
```
where `cuda_context` could be a global variable of type
`paddle::CUDADeviceContext`.
## Multi-Block Code Generation
Most Fluid application programs may have more than one blocks. To
execute them, we need to trace [scopes](scope.md).
......@@ -318,19 +318,13 @@
<span id="the-execution-of-a-fluid-program"></span><h2>The Execution of a Fluid Program<a class="headerlink" href="#the-execution-of-a-fluid-program" title="永久链接至标题"></a></h2>
<p>There are two ways to execute a Fluid program. When a program is executed, it creates a protobuf message <a class="reference external" href="https://github.com/PaddlePaddle/Paddle/blob/a91efdde6910ce92a78e3aa7157412c4c88d9ee8/paddle/framework/framework.proto#L145"><code class="docutils literal"><span class="pre">ProgramDesc</span></code></a> that describes the process and is conceptually like an <a class="reference external" href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">abstract syntax tree</a>.</p>
<p>There is a C++ class <a class="reference external" href="https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.h"><code class="docutils literal"><span class="pre">Executor</span></code></a>, which runs a <code class="docutils literal"><span class="pre">ProgramDesc</span></code>, similar to how an interpreter runs a Python program.</p>
<p>Fluid is moving towards the direction of a compiler, which is explain in more detail later in this article.</p>
<p>Fluid is moving towards the direction of a compiler, which is explain in <a class="reference internal" href="fluid_compiler.html"><span class="doc">fluid</span></a>.</p>
</div>
<div class="section" id="backward-compatibility-of-fluid">
<span id="backward-compatibility-of-fluid"></span><h2>Backward Compatibility of Fluid<a class="headerlink" href="#backward-compatibility-of-fluid" title="永久链接至标题"></a></h2>
<p>Given all the advantages from the removal of the concept of a <em>model</em>, hardware manufacturers might still prefer the existence of the concept of a model, so it would be easier for them to support multiple frameworks all at once and could run a trained model during inference. For example, Nervana, a startup company acquired by Intel, has been working on an XPU that reads the models in the format known as <a class="reference external" href="https://github.com/NervanaSystems/ngraph">n-graph</a>. Similarly, <a class="reference external" href="https://www.movidius.com/">Movidius</a> is producing a mobile deep learning chip that reads and runs graphs of operators. The well-known <a class="reference external" href="https://github.com/onnx/onnx">ONNX</a> is also a file format of graphs of operators.</p>
<p>For Fluid, we can write a converter that extracts the parts in the <code class="docutils literal"><span class="pre">ProgramDesc</span></code> protobuf message, converts them into a graph of operators, and exports the graph into the ONNX or n-graph format.</p>
</div>
<div class="section" id="towards-a-deep-learning-language-and-the-compiler">
<span id="towards-a-deep-learning-language-and-the-compiler"></span><h2>Towards a Deep Learning Language and the Compiler<a class="headerlink" href="#towards-a-deep-learning-language-and-the-compiler" title="永久链接至标题"></a></h2>
<p>We can change the <code class="docutils literal"><span class="pre">if-then-else</span></code> and loop structure a little bit in the above Fluid example programs, to make it into a new programming language, different than Python.</p>
<p>Even if we do not invent a new language, as long as we get the <code class="docutils literal"><span class="pre">ProgramDesc</span></code> message filled in, we can write a transpiler, which translates each invocation to an operator, into a C++ call to a kernel function of that operator. For example, a transpiler that weaves the CUDA kernels outputs an NVIDIA-friendly C++ program, which can be built using <code class="docutils literal"><span class="pre">nvcc</span></code>. Another transpiler could generate MKL-friendly code that should be built using <code class="docutils literal"><span class="pre">icc</span></code> from Intel. More interestingly, we can translate a Fluid program into its distributed version of two <code class="docutils literal"><span class="pre">ProgramDesc</span></code> messages, one for running on the trainer process, and the other one for the parameter server. For more details of the last example, the <a class="reference internal" href="concurrent_programming.html"><span class="doc">concurrent programming design</span></a> document would be a good pointer. The following figure explains the proposed two-stage process:</p>
<p><img alt="" src="../_images/fluid-compiler.png" /></p>
</div>
</div>
......
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>PaddlePaddle Fluid: Towards a Compiled Programming Language &mdash; PaddlePaddle 文档</title>
<link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
<link rel="index" title="索引"
href="../genindex.html"/>
<link rel="search" title="搜索" href="../search.html"/>
<link rel="top" title="PaddlePaddle 文档" href="../index.html"/>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" type="text/css" />
<link rel="stylesheet" href="../_static/css/override.css" type="text/css" />
<script>
var _hmt = _hmt || [];
(function() {
var hm = document.createElement("script");
hm.src = "//hm.baidu.com/hm.js?b9a314ab40d04d805655aab1deee08ba";
var s = document.getElementsByTagName("script")[0];
s.parentNode.insertBefore(hm, s);
})();
</script>
<script src="../_static/js/modernizr.min.js"></script>
</head>
<body class="wy-body-for-nav" role="document">
<header class="site-header">
<div class="site-logo">
<a href="/"><img src="../_static/images/PP_w.png"></a>
</div>
<div class="site-nav-links">
<div class="site-menu">
<a class="fork-on-github" href="https://github.com/PaddlePaddle/Paddle" target="_blank"><i class="fa fa-github"></i>Fork me on Github</a>
<div class="language-switcher dropdown">
<a type="button" data-toggle="dropdown">
<span>English</span>
<i class="fa fa-angle-up"></i>
<i class="fa fa-angle-down"></i>
</a>
<ul class="dropdown-menu">
<li><a href="/doc_cn">中文</a></li>
<li><a href="/doc">English</a></li>
</ul>
</div>
<ul class="site-page-links">
<li><a href="/">Home</a></li>
</ul>
</div>
<div class="doc-module">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../getstarted/index_cn.html">新手入门</a></li>
<li class="toctree-l1"><a class="reference internal" href="../howto/index_cn.html">进阶指南</a></li>
<li class="toctree-l1"><a class="reference internal" href="../api/index_cn.html">API</a></li>
<li class="toctree-l1"><a class="reference internal" href="../faq/index_cn.html">FAQ</a></li>
<li class="toctree-l1"><a class="reference internal" href="../mobile/index_cn.html">MOBILE</a></li>
</ul>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
</div>
</header>
<div class="main-content-wrap">
<nav class="doc-menu-vertical" role="navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../getstarted/index_cn.html">新手入门</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../getstarted/build_and_install/index_cn.html">安装与编译</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../getstarted/build_and_install/pip_install_cn.html">使用pip安装</a></li>
<li class="toctree-l3"><a class="reference internal" href="../getstarted/build_and_install/docker_install_cn.html">使用Docker安装运行</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/dev/build_cn.html">用Docker编译和测试PaddlePaddle</a></li>
<li class="toctree-l3"><a class="reference internal" href="../getstarted/build_and_install/build_from_source_cn.html">从源码编译</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../getstarted/concepts/use_concepts_cn.html">基本使用概念</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../howto/index_cn.html">进阶指南</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../howto/usage/cmd_parameter/index_cn.html">设置命令行参数</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cmd_parameter/use_case_cn.html">使用案例</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cmd_parameter/arguments_cn.html">参数概述</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cmd_parameter/detail_introduction_cn.html">细节描述</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../howto/usage/cluster/cluster_train_cn.html">分布式训练</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cluster/fabric_cn.html">fabric集群</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cluster/openmpi_cn.html">openmpi集群</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cluster/k8s_cn.html">kubernetes单机</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cluster/k8s_distributed_cn.html">kubernetes distributed分布式</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/cluster/k8s_aws_cn.html">AWS上运行kubernetes集群训练</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../howto/usage/capi/index_cn.html">PaddlePaddle C-API</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/capi/compile_paddle_lib_cn.html">编译 PaddlePaddle 预测库</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/capi/organization_of_the_inputs_cn.html">输入/输出数据组织</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/usage/capi/workflow_of_capi_cn.html">C-API 使用流程</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../howto/dev/contribute_to_paddle_cn.html">如何贡献代码</a></li>
<li class="toctree-l2"><a class="reference internal" href="../howto/dev/write_docs_cn.html">如何贡献/修改文档</a></li>
<li class="toctree-l2"><a class="reference internal" href="../howto/deep_model/rnn/index_cn.html">RNN相关模型</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../howto/deep_model/rnn/rnn_config_cn.html">RNN配置</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/deep_model/rnn/recurrent_group_cn.html">Recurrent Group教程</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/deep_model/rnn/hierarchical_layer_cn.html">支持双层序列作为输入的Layer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../howto/deep_model/rnn/hrnn_rnn_api_compare_cn.html">单双层RNN API对比介绍</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../howto/optimization/gpu_profiling_cn.html">GPU性能分析与调优</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../api/index_cn.html">API</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../api/v2/model_configs.html">模型配置</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/config/activation.html">Activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/config/layer.html">Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/config/evaluators.html">Evaluators</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/config/optimizer.html">Optimizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/config/pooling.html">Pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/config/networks.html">Networks</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/config/attr.html">Parameter Attribute</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../api/v2/data.html">数据访问</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/data/data_reader.html">Data Reader Interface</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/data/image.html">Image Interface</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/data/dataset.html">Dataset</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../api/v2/run_logic.html">训练与应用</a></li>
<li class="toctree-l2"><a class="reference internal" href="../api/v2/fluid.html">Fluid</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/layers.html">Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/data_feeder.html">DataFeeder</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/executor.html">Executor</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/initializer.html">Initializer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/evaluator.html">Evaluator</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/nets.html">Nets</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/optimizer.html">Optimizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/param_attr.html">ParamAttr</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/profiler.html">Profiler</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/regularizer.html">Regularizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../api/v2/fluid/io.html">IO</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../faq/index_cn.html">FAQ</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../faq/build_and_install/index_cn.html">编译安装与单元测试</a></li>
<li class="toctree-l2"><a class="reference internal" href="../faq/model/index_cn.html">模型配置</a></li>
<li class="toctree-l2"><a class="reference internal" href="../faq/parameter/index_cn.html">参数设置</a></li>
<li class="toctree-l2"><a class="reference internal" href="../faq/local/index_cn.html">本地训练与预测</a></li>
<li class="toctree-l2"><a class="reference internal" href="../faq/cluster/index_cn.html">集群训练与预测</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../mobile/index_cn.html">MOBILE</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../mobile/cross_compiling_for_android_cn.html">Android平台编译指南</a></li>
<li class="toctree-l2"><a class="reference internal" href="../mobile/cross_compiling_for_ios_cn.html">iOS平台编译指南</a></li>
<li class="toctree-l2"><a class="reference internal" href="../mobile/cross_compiling_for_raspberry_cn.html">Raspberry Pi平台编译指南</a></li>
</ul>
</li>
</ul>
</nav>
<section class="doc-content-wrap">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li>PaddlePaddle Fluid: Towards a Compiled Programming Language</li>
</ul>
</div>
<div class="wy-nav-content" id="doc-content">
<div class="rst-content">
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="paddlepaddle-fluid-towards-a-compiled-programming-language">
<span id="paddlepaddle-fluid-towards-a-compiled-programming-language"></span><h1>PaddlePaddle Fluid: Towards a Compiled Programming Language<a class="headerlink" href="#paddlepaddle-fluid-towards-a-compiled-programming-language" title="永久链接至标题"></a></h1>
<p>As described in <a class="reference internal" href="fluid.html"><span class="doc">fluid.md</span></a>, when a Fluid application program
runs, it generates a <code class="docutils literal"><span class="pre">ProgramDesc</span></code> protobuf message as an intermediate
representation of itself. The C++ class <code class="docutils literal"><span class="pre">Executor</span></code> can run this
protobuf message as an interpreter. This article describes the Fluid
compiler.</p>
<p><img alt="" src="../_images/fluid-compiler.png" /></p>
<div class="section" id="programdesc">
<span id="programdesc"></span><h2>ProgramDesc<a class="headerlink" href="#programdesc" title="永久链接至标题"></a></h2>
<p>Before we go deeper into the idea of compiled language, let us take a
look at a simple example Fluid application.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="s2">&quot;fluid&quot;</span>
<span class="n">func</span> <span class="n">paddlepaddle</span><span class="p">()</span> <span class="p">{</span>
<span class="n">X</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">read</span><span class="p">(</span><span class="o">...</span><span class="p">)</span>
<span class="n">W</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">Tensor</span><span class="p">(</span><span class="o">...</span><span class="p">)</span>
<span class="n">Y</span> <span class="o">=</span> <span class="n">fluid</span><span class="o">.</span><span class="n">mult</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">W</span><span class="p">)</span>
<span class="p">}</span>
</pre></div>
</div>
<p>This program consists of a <a class="reference internal" href="block.html"><span class="doc">block</span></a> of three operators &#8211;
<code class="docutils literal"><span class="pre">read</span></code>, <code class="docutils literal"><span class="pre">assign</span></code>, and <code class="docutils literal"><span class="pre">mult</span></code>. Its <code class="docutils literal"><span class="pre">ProgramDesc</span></code> message looks like
the following</p>
<div class="highlight-protobuf"><div class="highlight"><pre><span></span><span class="kd">message</span> <span class="nc">ProgramDesc</span> <span class="p">{</span>
<span class="n">block</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">=</span> <span class="n">Block</span> <span class="p">{</span>
<span class="na">vars</span> <span class="o">=</span> <span class="p">[</span><span class="n">X</span><span class="p">,</span> <span class="n">W</span><span class="p">,</span> <span class="n">Y</span><span class="p">],</span>
<span class="na">ops</span> <span class="o">=</span> <span class="p">[</span>
<span class="n">read</span><span class="p">(</span><span class="na">output</span> <span class="o">=</span> <span class="n">X</span><span class="p">)</span>
<span class="n">assign</span><span class="p">(</span><span class="na">input</span> <span class="o">=</span> <span class="o">...</span><span class="p">,</span> <span class="na">output</span> <span class="o">=</span> <span class="n">W</span><span class="p">)</span>
<span class="n">mult</span><span class="p">(</span><span class="na">input</span> <span class="o">=</span> <span class="p">{</span><span class="n">X</span><span class="p">,</span> <span class="n">W</span><span class="p">},</span> <span class="na">output</span> <span class="o">=</span> <span class="n">Y</span><span class="p">)</span>
<span class="p">],</span>
<span class="p">}</span>
<span class="p">}</span>
</pre></div>
</div>
</div>
<div class="section" id="transpilers">
<span id="transpilers"></span><h2>Transpilers<a class="headerlink" href="#transpilers" title="永久链接至标题"></a></h2>
<p>We can write a transpiler program that takes a <code class="docutils literal"><span class="pre">ProgramDesc</span></code>, e.g.,
the above one, and outputs another <code class="docutils literal"><span class="pre">ProgramDesc</span></code>. Let us take some
examples:</p>
<ol class="simple">
<li><em>Memory optimization transpiler</em>: We can write a transpiler that
inserts some <code class="docutils literal"><span class="pre">FreeMemoryOp</span></code>s in the above example <code class="docutils literal"><span class="pre">ProgramDesc</span></code> so
to free memory early, before the end of an iteration, so to keep a
small memory footprint.</li>
<li><em>Distributed training transpiler</em>: We can write a transpiler that
converts a<code class="docutils literal"><span class="pre">ProgramDesc</span></code> into its distributed version of two
<code class="docutils literal"><span class="pre">ProgramDesc</span></code>s &#8211; one for running by the trainer processes and the
other for the parameter server.</li>
</ol>
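<p>To make the first example concrete, the following is a minimal sketch of
a memory optimization transpiler. All of the accessors it uses
(<code class="docutils literal"><span class="pre">MutableBlock</span></code>, <code class="docutils literal"><span class="pre">OpSize</span></code>, <code class="docutils literal"><span class="pre">InsertOp</span></code>, <code class="docutils literal"><span class="pre">InputArgumentNames</span></code>) and the
<code class="docutils literal"><span class="pre">free_memory</span></code> operator type are assumptions for illustration, not an
existing Fluid API.</p>
<div class="highlight-c++"><div class="highlight"><pre>// A sketch, assuming ProgramDesc/BlockDesc/OpDesc wrappers resembling
// those in paddle/framework; treat every name below as hypothetical.
#include &lt;map>
#include &lt;string>
#include &lt;vector>

using paddle::framework::ProgramDesc;

ProgramDesc InsertFreeMemoryOps(const ProgramDesc&amp; input) {
  ProgramDesc output(input);  // copy, then rewrite block[0] in place
  auto* block = output.MutableBlock(0);

  // For every variable, record the index of the last operator reading it.
  // A real pass would skip persistent parameters such as W.
  std::map&lt;std::string, size_t> last_use;
  for (size_t i = 0; i &lt; block->OpSize(); ++i) {
    for (const auto&amp; var : block->Op(i)->InputArgumentNames()) {
      last_use[var] = i;
    }
  }

  // Group variables by the index of their last use ...
  std::map&lt;size_t, std::vector&lt;std::string>> frees;
  for (const auto&amp; kv : last_use) frees[kv.second].push_back(kv.first);

  // ... and insert one FreeMemoryOp right after each such index, walking
  // back to front so insertions do not shift unprocessed indices.
  for (auto it = frees.rbegin(); it != frees.rend(); ++it) {
    auto* op = block->InsertOp(it->first + 1);
    op->SetType("free_memory");
    op->SetInput("X", it->second);  // the variables to release
  }
  return output;
}
</pre></div>
</div>
<p>The distributed training transpiler follows the same pattern: it rewrites
the op list, but splits it into a trainer-side and a parameter-server-side
<code class="docutils literal"><span class="pre">ProgramDesc</span></code> instead of inserting ops in place.</p>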
<p>In the rest of this article, we talk about a special kind of
transpiler, the <em>native code generator</em>, which takes a <code class="docutils literal"><span class="pre">ProgramDesc</span></code> and
generates a <code class="docutils literal"><span class="pre">.cu</span></code> (or <code class="docutils literal"><span class="pre">.cc</span></code>) file that a C++
compiler (gcc, nvcc, icc) can build into a binary.</p>
</div>
<div class="section" id="native-code-generator">
<span id="native-code-generator"></span><h2>Native Code Generator<a class="headerlink" href="#native-code-generator" title="永久链接至标题"></a></h2>
<p>For the above example, the native code generator transpiler, say, the
CUDA code generator, should generate a <code class="docutils literal"><span class="pre">main</span></code> function:</p>
<div class="highlight-c++"><div class="highlight"><pre><span></span><span class="kt">void</span> <span class="nf">main</span><span class="p">()</span> <span class="p">{</span>
<span class="k">auto</span> <span class="n">X</span> <span class="o">=</span> <span class="n">fluid_cuda_read</span><span class="p">(...);</span>
<span class="k">auto</span> <span class="n">W</span> <span class="o">=</span> <span class="n">fluid_cuda_create_tensor</span><span class="p">(...);</span>
<span class="k">auto</span> <span class="n">Y</span> <span class="o">=</span> <span class="n">fluid_cuda_mult</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">W</span><span class="p">);</span>
<span class="p">}</span>
</pre></div>
</div>
<p>and the definitions of the functions <code class="docutils literal"><span class="pre">fluid_cuda_read</span></code>,
<code class="docutils literal"><span class="pre">fluid_cuda_create_tensor</span></code>, and <code class="docutils literal"><span class="pre">fluid_cuda_mult</span></code>. Note
that each function can simply define a C++ instance of an operator and
run it. For example:</p>
<div class="highlight-c++"><div class="highlight"><pre><span></span><span class="n">paddle</span><span class="o">::</span><span class="n">Tensor</span> <span class="n">fluid_cuda_read</span><span class="p">(...)</span> <span class="p">{</span>
<span class="n">paddle</span><span class="o">::</span><span class="n">Tensor</span> <span class="n">t</span><span class="p">;</span>
<span class="n">paddle</span><span class="o">::</span><span class="k">operator</span><span class="o">::</span><span class="n">Read</span> <span class="n">r</span><span class="p">(</span><span class="o">&amp;</span><span class="n">t</span><span class="p">,</span> <span class="p">...);</span>
<span class="n">r</span><span class="p">.</span><span class="n">Run</span><span class="p">();</span>
<span class="k">return</span> <span class="n">t</span><span class="p">;</span>
<span class="p">}</span>
</pre></div>
</div>
<p>For computational operators that have multiple <em>kernels</em>, one per
hardware platform &#8211; the <code class="docutils literal"><span class="pre">mult</span></code> operator, for example &#8211; the
generated code should call the operator's CUDA kernel:</p>
<div class="highlight-c++"><div class="highlight"><pre><span></span><span class="n">paddle</span><span class="o">::</span><span class="n">Tensor</span> <span class="n">fluid_cuda_mult</span><span class="p">(</span><span class="k">const</span> <span class="n">paddle</span><span class="o">::</span><span class="n">Tensor</span><span class="o">&amp;</span> <span class="n">a</span><span class="p">,</span>
<span class="k">const</span> <span class="n">paddle</span><span class="o">::</span><span class="n">Tensor</span><span class="o">&amp;</span> <span class="n">b</span><span class="p">)</span> <span class="p">{</span>
<span class="n">paddle</span><span class="o">::</span><span class="n">Tensor</span> <span class="n">t</span><span class="p">;</span>
<span class="n">paddle</span><span class="o">::</span><span class="k">operator</span><span class="o">::</span><span class="n">Mult</span> <span class="n">m</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="p">...);</span>
<span class="n">Mult</span><span class="p">.</span><span class="n">Run</span><span class="p">(</span><span class="n">cuda_context</span><span class="p">);</span>
<span class="p">}</span>
</pre></div>
</div>
<p>where <code class="docutils literal"><span class="pre">cuda_context</span></code> could be a global variable of type
<code class="docutils literal"><span class="pre">paddle::CUDADeviceContext</span></code>.</p>
</div>
<div class="section" id="multi-block-code-generation">
<span id="multi-block-code-generation"></span><h2>Multi-Block Code Generation<a class="headerlink" href="#multi-block-code-generation" title="永久链接至标题"></a></h2>
<p>Most Fluid application programs have more than one block. To
execute them, we need to trace <a class="reference internal" href="scope.html"><span class="doc">scopes</span></a>.</p>
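<p>For example, a <code class="docutils literal"><span class="pre">while_op</span></code> whose body lives in <code class="docutils literal"><span class="pre">block[1]</span></code> could compile
into a loop that creates one child scope per step, so that variables local
to the body do not collide across iterations. The sketch below is
hypothetical generated code; <code class="docutils literal"><span class="pre">Scope</span></code>'s interface and the helper
<code class="docutils literal"><span class="pre">ReadCondVar</span></code> are assumptions, only loosely modeled on
<code class="docutils literal"><span class="pre">paddle::framework::Scope</span></code>.</p>
<div class="highlight-c++"><div class="highlight"><pre>// Hypothetical generated code for a program whose block[0] contains a
// while_op with its body in block[1].
#include "paddle/framework/scope.h"

void fluid_cuda_block_1(paddle::framework::Scope* scope);  // body of block[1]

void fluid_cuda_while(paddle::framework::Scope* scope) {
  while (ReadCondVar(scope)) {  // assumed helper: reads the loop condition
    // Each step runs in a fresh child scope, so variables defined in
    // block[1] shadow, rather than overwrite, those of block[0].
    paddle::framework::Scope&amp; step_scope = scope->NewScope();
    fluid_cuda_block_1(&amp;step_scope);
  }
  scope->DropKids();  // release all step scopes when the loop exits
}
</pre></div>
</div>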
</div>
</div>
</div>
</div>
</div>
</div>
</section>
</div>
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT:'../',
VERSION:'',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: ".txt",
};
</script>
<script type="text/javascript" src="../_static/jquery.js"></script>
<script type="text/javascript" src="../_static/underscore.js"></script>
<script type="text/javascript" src="../_static/doctools.js"></script>
<script type="text/javascript" src="../_static/translations.js"></script>
<script type="text/javascript" src="https://cdn.bootcss.com/mathjax/2.7.0/MathJax.js"></script>
<script type="text/javascript" src="../_static/js/theme.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/js/perfect-scrollbar.jquery.min.js"></script>
<script src="../_static/js/paddle_doc_init.js"></script>
</body>
</html>