Commit 8e2597be authored by LiuYongFeng, committed by GitHub

Merge branch 'gh-pages' into gh-pages

# Design Doc: Save Model
## Overview
The model is the output of the training process. There are two
ways a user can obtain a model:
- Save model triggered by user code: user code asks PaddlePaddle to
save a model.
- Convert model from a checkpoint: the model is converted from the
pservers' periodic checkpoints. This way, the user can cancel a
job at any time and still have a relatively fresh model (we
checkpoint around every 5 minutes).
### Trainer Saving Model vs. Pservers Saving Model
Both trainers and pservers have access to the model, so the model can
be saved from either. We need to decide where the model is saved from.
#### Dense Update vs. Sparse Update
There are two types of model update methods: dense update and sparse
update (when the model parameter is configured to be sparse).
- Dense update
Every trainer has its own full copy of the model. Every model
update touches the entire model.
- Sparse update
The training input is sparse, and the trainer does not hold the
entire model. It only downloads the sub-model related to its
input, and when updating the model, only that sub-model is updated.
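The two update styles can be illustrated with a minimal sketch (the helper names below are hypothetical, not PaddlePaddle's actual API): a dense update touches every parameter, while a sparse update touches only the rows that appear in the gradient.

```python
# Minimal illustration of dense vs. sparse update semantics.
# Hypothetical helpers; PaddlePaddle's real update path differs.

def dense_update(model, grad, lr=0.1):
    """Every parameter row is updated on every step."""
    return {k: v - lr * grad[k] for k, v in model.items()}

def sparse_update(model, grad_rows, lr=0.1):
    """Only rows appearing in the (row_id -> gradient) dict are updated;
    the trainer never needs to hold the full model."""
    for row_id, g in grad_rows.items():
        model[row_id] = model[row_id] - lr * g
    return model

model = {0: 1.0, 1: 2.0, 2: 3.0}
sparse_update(model, {1: 10.0})  # only row 1 changes
```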
#### Pservers Saving Model
The benefit of letting the pservers save the model is that they have
the entire model at all times. However, since the pservers run on
different nodes, a merging process is required to combine the model
shards into a single model. This in turn requires the pservers to
write the shards to a distributed filesystem, so that the checkpoint
shards are visible to the merge program.
#### Trainer Saving Model
The benefit of letting one trainer save the model is that it does not
require a distributed filesystem, and it reuses the same model-saving
logic as local training. The exception is sparse update, where the
trainer needs to download the entire model during the saving process.
#### Conclusion
Given that saving from a trainer does not require a distributed
filesystem and is an intuitive extension of saving the model when
training locally, we decide to let the trainer save the model when
doing distributed training.
### Convert Model from Checkpoint
TODO
## Timeline
We will first implement saving the model from the trainer. Converting
the latest snapshot to a model is left as a TODO for the future.
## Trainer Save Model
### Trainer Election
One trainer will be elected to save the model. When using etcd,
trainer IDs are randomly generated UUIDs, so we will use etcd to
elect one trainer. When not using etcd, unique trainer IDs are
assigned by the administrator, and the trainer whose ID is "0" is
elected to save the model.
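The election rule can be sketched as follows (a simplified sketch with a hypothetical helper name; the real etcd path would acquire a distributed lock or lease rather than receive a boolean):

```python
def should_save_model(my_trainer_id, use_etcd=False, etcd_lock_acquired=False):
    """Decide whether this trainer is the one elected to save the model.

    Without etcd, IDs are assigned by the administrator and the trainer
    whose ID is "0" saves the model. With etcd, IDs are random UUIDs, so
    election is done by acquiring a lock in etcd (represented here by a
    plain boolean for brevity).
    """
    if use_etcd:
        return etcd_lock_acquired
    return my_trainer_id == "0"
```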
### Model Save Path
Each trainer will be given a directory in which to save the model. The
elected trainer will save the model to
`given-directory/trainerID`. Since the trainer ID is unique, this
prevents concurrent saves to the same file when multiple trainers are
elected during a split-brain situation.
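A minimal sketch of the path construction (the function name is hypothetical): because each path is keyed by the unique trainer ID, two trainers elected during a split-brain write to different files.

```python
import posixpath

def model_save_path(given_directory, trainer_id):
    """Each elected trainer writes under its own trainer-ID entry,
    so concurrent saves from different trainers never collide."""
    return posixpath.join(given_directory, trainer_id)

# e.g. model_save_path("/models/job-42", "0") -> "/models/job-42/0"
```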
### What Happens When Model Is Saving
Saving the model takes some time, so we need to define what happens
while the save is taking place.

When doing dense update, the trainer uses its local model, so the
pservers do not need to pause model updates.

When doing sparse update, the trainer needs to download the entire
model while saving. To get the most accurate model, model updates need
to be paused before the download starts and resumed after the download
finishes. Otherwise, the trainer gets a "polluted" model: some parts
of the model are old and some parts are new.

It is unclear whether a "polluted" model is actually inferior, given
the stochastic nature of deep learning, and pausing model updates
would add more complexity to the system. Since supporting sparse
update is itself a TODO item, we defer the decision on whether to
pause model updates during saving to the future.
## Interaction between C++ and Python
Users employ the Python API to describe their own networks; however, the network construction actually happens in C++, so Protobuf is introduced to pass messages between Python and C++.

The interaction between Python and C++ can be simplified into two steps:
1. C++ tells Python how many Ops there are and what parameters users need to provide to initialize a new Op. Python then builds an API for each Op at compile time.
2. Users invoke the APIs built by Python and provide the necessary parameters. These parameters are sent to C++ to finish the Op construction.
### Message from C++ to Python
We define a Protobuf message class `OpProto` to hold the messages needed in the first step. What should an `OpProto` contain? This question is equivalent to asking: "What messages do we need to provide in order to build a Python API that is legal, user-oriented, and able to describe a whole Op?"

The following messages are necessary:
1. The Op's name and a short comment.
2. The number of input and output variables, plus each variable's name, type, and comment.
3. The Op's attributes; each attribute includes a name, type, comment, **default value**, and **value range**.
So `OpProto` can be defined as follows:
```proto
enum AttrType {
  INT = 1;
  FLOAT = 2;
  STRING = 3;
  INTS = 4;
  FLOATS = 5;
  STRINGS = 6;
};

message AttrValue {
  optional AttrType type = 1;
  optional int32 iv = 2;
  optional float fv = 3;
  optional string sv = 4;
  repeated int32 ivs = 5;
  repeated float fvs = 6;
  repeated string svs = 7;
};

message AttrProto {
  required string name = 1;
  required string comment = 2;
  required AttrType type = 3;
};

message VarProto {
  required string name = 1;
  required string comment = 2;
};

message OpProto {
  repeated VarProto inputs = 1;
  repeated VarProto outputs = 2;
  repeated AttrProto attrs = 3;
  required string type = 4;
  required string comment = 5;
};
```
To generate Python code automatically:
```python
def create_python_ops_creation_functions():
    op_protos = paddle.framework.OpRegistry.get_all_op_proto()
    for type_name in op_protos:
        op_proto = op_protos[type_name]

        # Bind op_proto/type_name as default arguments so each closure
        # keeps its own Op instead of the loop's last one.
        def __impl__(op_proto=op_proto, type_name=type_name, **kwargs):
            # Users must use keyword arguments in the Paddle API.
            inputs = [kwargs.get(ipt.name, "") for ipt in op_proto.inputs]
            outputs = [kwargs.get(opt.name, "") for opt in op_proto.outputs]
            attrs = [
                cast_to_op_attr(attr, kwargs.get(attr.name, None))
                for attr in op_proto.attrs
            ]
            opdesc = (inputs, outputs, type_name, attrs)
            return paddle.framework.OpRegistry.CreateOp(opdesc)

        __impl__.__doc__ = create_doc_string(op_proto)
        globals()[type_name] = __impl__

create_python_ops_creation_functions()
```
### Message from Python to C++
To hold the messages needed in the second step above, we define a Protobuf message class `OpDesc`. It holds the user-specified parameters when describing an Op.
```proto
message OpDesc {
  required string type = 1;
  repeated string inputs = 2;
  repeated string outputs = 3;
  map<string, AttrValue> attrs = 4;
};
```
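As a sketch of the mapping from user-supplied parameters to this message (plain Python dicts stand in for the generated Protobuf classes; the helper name and field values are hypothetical):

```python
def make_op_desc(op_type, inputs, outputs, attrs):
    """Pack user-specified parameters into an OpDesc-like dict,
    mirroring the Protobuf message fields above."""
    return {
        "type": op_type,           # required string type = 1
        "inputs": list(inputs),    # repeated string inputs = 2
        "outputs": list(outputs),  # repeated string outputs = 3
        "attrs": dict(attrs),      # map<string, AttrValue> attrs = 4
    }

desc = make_op_desc("cos", ["X"], ["Out"], {"scale": 2.0})
```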
## OpProto Register
Every Op has its own `OpProto`. For convenience, we need to register them and record all their messages. For each `Op` class, we define a corresponding `OpMaker` class, whose constructor implements the building of the `OpProto`. The `OpMaker`'s constructor will be invoked by another function, `OpRegistry::RegisterOp()`.
```cpp
class OpProtoMaker {
 public:
  OpProtoMaker(OpProto* proto) : proto_(proto) {}

 protected:
  OpProto* proto_;
  void AddInput(const std::string& name, const std::string& desc) {...}
  void AddAttr(const std::string& name, const std::string& desc, TypeId type) {...}
  void AddComment(const std::string& comment) {...}
};

class OpRegistry {
 public:
  using OpCreator = std::function<OperatorBase*(const OpDesc& desc)>;

  template <typename OpType, typename OpMaker>
  static void RegisterOp(const std::string& name) {
    gCreators_[name] = [](const OpDesc& desc) { return new OpType(desc); };
    OpProto& opProto = gProtos_[name];
    OpMaker maker(&opProto);  // The maker's constructor fills in opProto.
  }

  static map<string, OpCreator> gCreators_;
  static map<string, OpProto> gProtos_;
};

template <typename OpType, typename OpMaker>
class OpRegister {
 public:
  OpRegister(std::string type) {
    OpRegistry::RegisterOp<OpType, OpMaker>(type);
  }
};

#define REGISTER_OP(op_class, op_maker_class, type_name)   \
  class op_class##Register {                               \
   private:                                                \
    static const OpRegister<op_class, op_maker_class> reg; \
  };                                                       \
  const OpRegister<op_class, op_maker_class>               \
      op_class##Register::reg(#type_name);

class CosineOp {
  // ...
};

struct CosineOpProtoMaker : public OpProtoMaker {
  CosineOpProtoMaker(OpProto* proto) : OpProtoMaker(proto) {
    AddInput("input", "input of cosine op");
    AddAttr("scale", "scale of cosine op", float).Default(1.0).LargerThan(0.0);
    AddType("cos");
    AddComment("This is cos op");
  }
};

REGISTER_OP(CosineOp, CosineOpProtoMaker, cos);
```
In `REGISTER_OP(CosineOp, CosineOpProtoMaker, cos)`, we register not only `CosineOp` but also its `OpProto`. As fields of that proto, the default value and value range of `scale` are registered here as well.
## Python API
Python APIs are divided into two types: high-level APIs and low-level APIs.
### High-Level API
The high-level API is called by users directly, so it should keep its style consistent with the existing V2 APIs.
Here is a sample of how to define a fc layer:
```python
hd = fc_layer(input=data, size=56, with_bias=True, activation="sigmoid")
```
`hd` is the output of `fc_layer` and is a `variable`. It can be further sent into other layers as input.
The definition of `fc_layer()`:
```python
def fc_layer(input, size, with_bias, activation):
    attr_map = {"size": size}
    check_attrs(attr_map)
    w = make_variable('w')
    if with_bias:
        b = make_variable('b')
    else:
        b = None
    fc_output = make_variable('fc_output')
    fc_op(input, w, b, fc_output, attr_map)
    act_output = make_variable('sigmoid_output')
    if activation == "sigmoid":
        sigmoid_op(fc_output, act_output)
    else:
        # ... other activations
        pass
    return act_output
```
### Low-Level API
In the sample above, `fc_op` and `sigmoid_op` are low-level APIs. They build an `OpDesc` and invoke the corresponding C++ code.
*TODO*
# Design Doc: Save Model
## Overview
The model is the output of the training process. There are two
ways from which user can obtain a model:
- Save model triggered by user code: user code asks PaddlePaddle to
save a model.
- Convert model from the checkpoint: model being converted from
pservers' periodic checkpoint. In this way, the user can cancel a
job at any time, and still have a relatively fresh model (we
checkpoint around every 5 minutes).
### Trainer Saving Model vs. Pservers Saving Model
Both trainers and pservers have access to the model. So the model can
be saved from a trainer or pservers. We need to decide where the model
is saved from.
#### Dense Update vs. Sparse Update
There are two types of model update methods: dense update and sparse
update (when the model parameter is configured to be sparse).
- Dense update
Every trainer has it's own full copy of the model. Every model
update will update the entire model.
- Sparse update
The training input is sparse, and the trainer does not have the
entire model. It will only download the sub-model necessary related
to the input. When updating the model, only the sub-model related to
the training input is updated.
#### Pservers Saving Model
The benefit of letting pservers save model is they have the entire
model all the time. However, since pservers are on different nodes, it
requires a merging process to merge model shards into the same
model. Thus requires the pservers to write models to a distributed
filesystem, making the checkpoint shards visible to the merge program.
#### Trainer Saving Model
The benefit of letting one trainer to save the model is it does not
require a distributed filesystem. And it's reusing the same save model
logic when training locally - except when doing sparse update, the
trainer needs to download the entire model during the saving process.
#### Conclusion
Given trainer saving model does not require a distributed filesystem,
and is an intuitive extension to trainer saving model when training
locally, we decide to let the trainer save the model when doing
distributed training.
### Convert Model from Checkpoint
TODO
## Timeline
We first implement trainer save the model. Converting the latest
snapshot to a model will be a TODO for future.
## Trainer Save Model
### Trainer Election
One trainer will be elected as the one to save the model. When using
etcd, trainer ID is a randomly generated UUID, we will utilize etcd to
elect one trainer. When not using etcd, unique trainer IDs will be
given by the administrator, the trainer whose ID is "0" is elected to
save the model.
### Model Save Path
Each trainer will be given the directory to save the model. The
elected trainer will save the model to
`given-directory/trainerID`. Since the trainer ID is unique, this
would prevent concurrent save to the same file when multiple trainers
are elected to save the model when split-brain problem happens.
### What Happens When Model Is Saving
It takes some time to save model, we need to define what will happen
when save model is taking place.
When doing dense update, the trainer uses the local model. Pservers
does not need to pause model update.
When doing sparse update. The trainer needs to download the entire
model while saving. To get the most accurate model, the model update
needs to be paused before the download starts and resumed after the
download finishes. Otherwise, the trainer gets a model that is
"polluted": some part of the model is old, some part of the model is
new.
It's unclear that the "polluted" model will be inferior due to the
stochastic nature of deep learning, and pausing the model update will
add more complexity to the system. Since supporting sparse update is a
TODO item. We defer the evaluation of pause the model update or not
during saving model to the future.
## Interaction between C++ and Python
Users employ API in Python to describe their own network, however, the network construction actually happens in C++. so Protobuf is introduced to send the message between Python and C++.
The Interaction between Python and C++ can be simplified as two steps:
1. C++ tells Python how many Ops there are, and what parameter do users need to offer to initialize a new Op. Python then builds API for each Op at compile time.
2. Users invoke APIs built by Python and provide necessary parameters. These parameters will be sent to C++ fo finish Op construction task.
### Message form C++ to Python
We define a Protobuf message class `OpProto` to hold message needed in the first step. What should an `OpProto` contain? This question is equivalent to “What message do we need to offer, to build a Python API which is legal and user oriented and can use to describe a whole Op.”
Following message are necessary:
1. Op's name, and its simple comment.
2. Input and output variable number; each variable's name, type, and comment.
3. Op's attributes; each attribute includes name, type, comment, **default value** and **value range**.
So `OpProto` can be defined as follows:
```proto
enum AttrType {
INT = 1;
FLOAT = 2;
STRING = 3;
INTS = 4;
FLOATS = 5;
STRINGS = 6;
};
message AttrValue {
AttrType type = 1;
optional int iv = 2;
optional float fv = 3;
optional string sv = 4;
repeated int ivs = 5;
repeated float fvs = 6;
repeated string svs = 7;
};
message AttrProto {
required string name = 1;
required string comment = 2;
required AttrType type = 3;
};
message VarProto {
required string name = 1;
required string comment = 2;
};
message OpProto {
repeated VarProto inputs = 1;
repeated VarProto outputs = 2;
repeated AttrProto attrs = 3;
required string type = 4;
required string comment = 5;
};
```
To generate Python code automatically:
```python
def create_python_ops_creatation_functions():
op_protos = paddle.framework.OpRegistry.get_all_op_proto()
for type_name in op_protos:
op_proto = op_protos[type_name]
def __impl__(**kwargs): # User must use key word args in Paddle API
inputs = [kwargs.get(ipt.name, "") for ipt in op_proto.inputs]
outputs = [kwargs.get(opt.name, "") for opt in op_proto.outputs]
attrs = [cast_to_op_attr(attr, kwargs.get(attr.name, None)) for attr in op_proto.attrs]
opdesc = (input, outputs, type_name, attrs)
return paddle.framework.OpRegistry.CreateOp(opdesc)
__impl__.__doc__ = create_doc_string(op_proto)
globals()[type_name] = __impl__
create_python_ops_creatation_functions()
```
### Message from Python to C++
To hold message needed in the above second step, we define Protobuf message class `OpDesc`. It is used to hold user-specified parameters in Op describing.
```proto
message OpDesc {
required string type = 1;
repeated string inputs = 2;
repeated string outputs = 3;
map<string, AttrValue> attrs = 4;
};
```
## OpProto Register
Every Op has its own `OpProto`. For using convenience, we need to register them and record all their messages. For each `Op` class, we define a corresponding `OpMaker` class, in whose constructor we implement the `OpProto`'s building process. `OpMaker`'s constructor will be invoked by another function `OpRegistry::RegisterOp()`.
```cpp
class OpProtoMaker {
public:
  OpProtoMaker(OpProto* proto) : proto_(proto) {}

protected:
  OpProto* proto_;
  void AddInput(const std::string& name, const std::string& desc) {...}
  // Returns a checker so Default() / LargerThan() can be chained to
  // register an attribute's default value and value range.
  AttrChecker& AddAttr(const std::string& name, const std::string& desc, TypeId type) {...}
  void AddType(const std::string& type) {...}
  void AddComment(const std::string& comment) {...}
};

class OpRegistry {
public:
  using OpCreator = std::function<OperatorBase*(const OpDesc& desc)>;

  template <typename OpType, typename OpMaker>
  static void RegisterOp(const std::string& name) {
    gCreators_[name] = [](const OpDesc& desc) {
      return new OpType(desc);
    };
    OpProto& opProto = gProtos_[name];
    OpMaker(&opProto);  // the maker's constructor fills in opProto
  }

  static map<string, OpCreator> gCreators_;
  static map<string, OpProto> gProtos_;
};

template <typename OpType, typename OpMaker>
class OpRegister {
public:
  OpRegister(std::string type) {
    OpRegistry::RegisterOp<OpType, OpMaker>(type);
  }
};

#define REGISTER_OP(op_class, op_maker_class, type_name)      \
  class op_class##Register {                                  \
   private:                                                   \
    static const OpRegister<op_class, op_maker_class> reg;    \
  };                                                          \
  const OpRegister<op_class, op_maker_class> op_class##Register::reg(#type_name);

class CosineOp {
  // ...
};

struct CosineOpProtoMaker : public OpProtoMaker {
  CosineOpProtoMaker(OpProto* proto) : OpProtoMaker(proto) {
    AddInput("input", "input of cosine op");
    AddAttr("scale", "scale of cosine op", typeid(float)).Default(1.0).LargerThan(0.0);
    AddType("cos");
    AddComment("This is cos op");
  }
};

REGISTER_OP(CosineOp, CosineOpProtoMaker, cos);
```
In `REGISTER_OP(CosineOp, CosineOpProtoMaker, cos)`, we register not only `CosineOp` but also its `OpProto`. The default value and value range of `scale` are registered here as part of that proto.
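The registration idea can be sketched in Python as well (illustrative names only; the real registry lives in C++): a creator and a proto are recorded per op type at registration time, and ops are later created by name.

```python
class OpRegistry:
    creators = {}
    protos = {}

    @classmethod
    def register_op(cls, name, op_class, maker):
        # Mirrors OpRegistry::RegisterOp: store a creator and let the
        # maker fill in the op's proto at registration time.
        cls.creators[name] = op_class
        proto = {}
        maker(proto)
        cls.protos[name] = proto

    @classmethod
    def create_op(cls, name, desc):
        return cls.creators[name](desc)

class CosineOp:
    def __init__(self, desc):
        self.desc = desc

def cosine_maker(proto):
    # Stand-in for CosineOpProtoMaker: record type plus the default
    # value and value range of the "scale" attribute.
    proto["type"] = "cos"
    proto["attrs"] = {"scale": {"default": 1.0, "larger_than": 0.0}}

OpRegistry.register_op("cos", CosineOp, cosine_maker)
op = OpRegistry.create_op("cos", {"inputs": ["x"]})
print(type(op).__name__)  # CosineOp
```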
## Python API
Python APIs are divided into two types: the high-level API and the low-level API.
### High-Level API
The high-level API is called by users directly, so it should keep its style consistent with the existing V2 APIs.
Here is a sample of how to define a fc layer:
```python
hd = fc_layer(input=data, size=56, with_bias=True, activation="sigmoid")
```
`hd` is the output of `fc_layer` and it's a `variable`. It can be further sent into other layers as input.
The definition of `fc_layer()`:
```python
def fc_layer(input, size, with_bias, activation):
    attr_map = {"size": size}
    check_attrs(attr_map)
    w = make_variable('w')
    if with_bias:
        b = make_variable('b')
    else:
        b = None
    fc_output = make_variable('fc_output')
    fc_op(input, w, b, fc_output, attr_map)
    act_output = make_variable('sigmoid_output')
    if activation == "sigmoid":
        sigmoid_op(fc_output, act_output)
    else:
        pass  # other activations ...
    return act_output
```
### Low-Level API
In the above sample, `fc_op` and `sigmoid_op` are low-level APIs. They build `OpDesc` and invoke the corresponding C++ code.
*TODO*
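While this section is still a TODO, a rough sketch of what such a low-level wrapper could look like (hypothetical names; the real implementation would serialize into an `OpDesc` protobuf and call into C++) is:

```python
ops = []  # stand-in for the C++ side that consumes OpDesc messages

def fc_op(input, w, b, output, attr_map):
    # A low-level wrapper only names its input/output variables and
    # attributes; the bias input is optional.
    inputs = [input, w] + ([b] if b is not None else [])
    ops.append({"type": "fc",
                "inputs": inputs,
                "outputs": [output],
                "attrs": dict(attr_map)})

def sigmoid_op(input, output):
    ops.append({"type": "sigmoid",
                "inputs": [input],
                "outputs": [output],
                "attrs": {}})

fc_op("data", "w", "b", "fc_out", {"size": 56})
sigmoid_op("fc_out", "act_out")
print([op["type"] for op in ops])  # ['fc', 'sigmoid']
```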
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Design Doc: Save Model &mdash; PaddlePaddle 文档</title>
<link rel="stylesheet" href="../../_static/css/theme.css" type="text/css" />
<link rel="index" title="索引"
href="../../genindex.html"/>
<link rel="search" title="搜索" href="../../search.html"/>
<link rel="top" title="PaddlePaddle 文档" href="../../index.html"/>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" type="text/css" />
<link rel="stylesheet" href="../../_static/css/override.css" type="text/css" />
<script>
var _hmt = _hmt || [];
(function() {
var hm = document.createElement("script");
hm.src = "//hm.baidu.com/hm.js?b9a314ab40d04d805655aab1deee08ba";
var s = document.getElementsByTagName("script")[0];
s.parentNode.insertBefore(hm, s);
})();
</script>
<script src="../../_static/js/modernizr.min.js"></script>
</head>
<body class="wy-body-for-nav" role="document">
<header class="site-header">
<div class="site-logo">
<a href="/"><img src="../../_static/images/PP_w.png"></a>
</div>
<div class="site-nav-links">
<div class="site-menu">
<a class="fork-on-github" href="https://github.com/PaddlePaddle/Paddle" target="_blank"><i class="fa fa-github"></i>Folk me on Github</a>
<div class="language-switcher dropdown">
<a type="button" data-toggle="dropdown">
<span>English</span>
<i class="fa fa-angle-up"></i>
<i class="fa fa-angle-down"></i>
</a>
<ul class="dropdown-menu">
<li><a href="/doc_cn">中文</a></li>
<li><a href="/doc">English</a></li>
</ul>
</div>
<ul class="site-page-links">
<li><a href="/">Home</a></li>
</ul>
</div>
<div class="doc-module">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../getstarted/index_cn.html">新手入门</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../howto/index_cn.html">进阶指南</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/index_cn.html">API</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../faq/index_cn.html">FAQ</a></li>
</ul>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="../../search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
</div>
</header>
<div class="main-content-wrap">
<nav class="doc-menu-vertical" role="navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../getstarted/index_cn.html">新手入门</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../getstarted/build_and_install/index_cn.html">安装与编译</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/docker_install_cn.html">PaddlePaddle的Docker容器使用方式</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/ubuntu_install_cn.html">Ubuntu部署PaddlePaddle</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../getstarted/build_and_install/cmake/build_from_source_cn.html">PaddlePaddle的编译选项</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../getstarted/concepts/use_concepts_cn.html">基本使用概念</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../howto/index_cn.html">进阶指南</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/cmd_parameter/index_cn.html">设置命令行参数</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/use_case_cn.html">使用案例</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/arguments_cn.html">参数概述</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/usage/cmd_parameter/detail_introduction_cn.html">细节描述</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/cluster/cluster_train_cn.html">运行分布式训练</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/k8s/k8s_basis_cn.html">Kubernetes 简介</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/k8s/k8s_cn.html">Kubernetes单机训练</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/usage/k8s/k8s_distributed_cn.html">Kubernetes分布式训练</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/dev/write_docs_cn.html">如何贡献/修改文档</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/dev/contribute_to_paddle_cn.html">如何贡献代码</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/deep_model/rnn/index_cn.html">RNN相关模型</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/rnn_config_cn.html">RNN配置</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/recurrent_group_cn.html">Recurrent Group教程</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/hierarchical_layer_cn.html">支持双层序列作为输入的Layer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../howto/deep_model/rnn/hrnn_rnn_api_compare_cn.html">单双层RNN API对比介绍</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../howto/optimization/gpu_profiling_cn.html">GPU性能分析与调优</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../api/index_cn.html">API</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/model_configs.html">模型配置</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/activation.html">Activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/layer.html">Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/evaluators.html">Evaluators</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/optimizer.html">Optimizer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/pooling.html">Pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/networks.html">Networks</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../api/v2/config/attr.html">Parameter Attribute</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/data.html">数据访问</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../api/v2/run_logic.html">训练与应用</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../faq/index_cn.html">FAQ</a></li>
</ul>
</nav>
<section class="doc-content-wrap">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li>Design Doc: Save Model</li>
</ul>
</div>
<div class="wy-nav-content" id="doc-content">
<div class="rst-content">
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="design-doc-save-model">
<span id="design-doc-save-model"></span><h1>Design Doc: Save Model<a class="headerlink" href="#design-doc-save-model" title="永久链接至标题"></a></h1>
<div class="section" id="overview">
<span id="overview"></span><h2>Overview<a class="headerlink" href="#overview" title="永久链接至标题"></a></h2>
<p>The model is the output of the training process. There are two
ways in which a user can obtain a model:</p>
<ul class="simple">
<li>Save model triggered by user code: user code asks PaddlePaddle to
save a model.</li>
<li>Convert model from a checkpoint: the model is converted from the
pservers&#8217; periodic checkpoints. In this way, the user can cancel a
job at any time and still have a relatively fresh model (we
checkpoint roughly every 5 minutes).</li>
</ul>
<div class="section" id="trainer-saving-model-vs-pservers-saving-model">
<span id="trainer-saving-model-vs-pservers-saving-model"></span><h3>Trainer Saving Model vs. Pservers Saving Model<a class="headerlink" href="#trainer-saving-model-vs-pservers-saving-model" title="永久链接至标题"></a></h3>
<p>Both trainers and pservers have access to the model, so the model
can be saved from either. We need to decide where the model should be
saved from.</p>
<div class="section" id="dense-update-vs-sparse-update">
<span id="dense-update-vs-sparse-update"></span><h4>Dense Update vs. Sparse Update<a class="headerlink" href="#dense-update-vs-sparse-update" title="永久链接至标题"></a></h4>
<p>There are two types of model update methods: dense update and sparse
update (when the model parameter is configured to be sparse).</p>
<ul>
<li><p class="first">Dense update</p>
<p>Every trainer has its own full copy of the model. Every model
update will update the entire model.</p>
</li>
<li><p class="first">Sparse update</p>
<p>The training input is sparse, and the trainer does not have the
entire model. It only downloads the sub-model related to its
input. When updating the model, only the sub-model related to the
training input is updated.</p>
</li>
</ul>
</div>
<div class="section" id="pservers-saving-model">
<span id="pservers-saving-model"></span><h4>Pservers Saving Model<a class="headerlink" href="#pservers-saving-model" title="永久链接至标题"></a></h4>
<p>The benefit of letting the pservers save the model is that they
have the entire model at all times. However, since the pservers run
on different nodes, a merging process is required to combine the
model shards into a single model. This in turn requires the pservers
to write the shards to a distributed filesystem, making the
checkpoint shards visible to the merge program.</p>
</div>
<div class="section" id="trainer-saving-model">
<span id="trainer-saving-model"></span><h4>Trainer Saving Model<a class="headerlink" href="#trainer-saving-model" title="永久链接至标题"></a></h4>
<p>The benefit of letting one trainer save the model is that it does
not require a distributed filesystem, and it reuses the model-saving
logic from local training - except that when doing sparse update, the
trainer needs to download the entire model during the saving
process.</p>
</div>
<div class="section" id="conclusion">
<span id="conclusion"></span><h4>Conclusion<a class="headerlink" href="#conclusion" title="永久链接至标题"></a></h4>
<p>Given that saving the model from a trainer does not require a
distributed filesystem, and that it is an intuitive extension of
saving the model when training locally, we decide to let the trainer
save the model when doing distributed training.</p>
</div>
</div>
<div class="section" id="convert-model-from-checkpoint">
<span id="convert-model-from-checkpoint"></span><h3>Convert Model from Checkpoint<a class="headerlink" href="#convert-model-from-checkpoint" title="永久链接至标题"></a></h3>
<p>TODO</p>
</div>
</div>
<div class="section" id="timeline">
<span id="timeline"></span><h2>Timeline<a class="headerlink" href="#timeline" title="永久链接至标题"></a></h2>
<p>We will first implement saving the model from the trainer. Converting
the latest snapshot to a model is a TODO for the future.</p>
</div>
<div class="section" id="trainer-save-model">
<span id="trainer-save-model"></span><h2>Trainer Save Model<a class="headerlink" href="#trainer-save-model" title="永久链接至标题"></a></h2>
<div class="section" id="trainer-election">
<span id="trainer-election"></span><h3>Trainer Election<a class="headerlink" href="#trainer-election" title="永久链接至标题"></a></h3>
<p>One trainer will be elected to save the model. When using etcd,
each trainer ID is a randomly generated UUID, and we will use etcd to
elect one trainer. When not using etcd, unique trainer IDs are
assigned by the administrator, and the trainer whose ID is &#8220;0&#8221; is
elected to save the model.</p>
</div>
<div class="section" id="model-save-path">
<span id="model-save-path"></span><h3>Model Save Path<a class="headerlink" href="#model-save-path" title="永久链接至标题"></a></h3>
<p>Each trainer will be given a directory in which to save the
model. The elected trainer will save the model to
<code class="docutils literal"><span class="pre">given-directory/trainerID</span></code>. Since the trainer ID is unique, this
prevents concurrent saves to the same file when multiple trainers are
elected to save the model during a split-brain situation.</p>
</div>
<div class="section" id="what-happens-when-model-is-saving">
<span id="what-happens-when-model-is-saving"></span><h3>What Happens When Model Is Saving<a class="headerlink" href="#what-happens-when-model-is-saving" title="永久链接至标题"></a></h3>
<p>Saving the model takes some time, so we need to define what happens
while the save is taking place.</p>
<p>When doing dense update, the trainer uses its local model. The
pservers do not need to pause model updates.</p>
<p>When doing sparse update, the trainer needs to download the entire
model while saving. To get the most accurate model, model updates need
to be paused before the download starts and resumed after the download
finishes. Otherwise, the trainer gets a &#8220;polluted&#8221; model: some parts
of the model are old while other parts are new.</p>
<p>It is unclear whether the &#8220;polluted&#8221; model will actually be
inferior, due to the stochastic nature of deep learning, and pausing
model updates adds complexity to the system. Since supporting sparse
update is a TODO item, we defer the decision on whether to pause model
updates during saving to the future.</p>
</div>
</div>
</div>
</div>
</div>
<footer>
<hr/>
<div role="contentinfo">
<p>
&copy; Copyright 2016, PaddlePaddle developers.
</p>
</div>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT:'../../',
VERSION:'',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: ".txt",
};
</script>
<script type="text/javascript" src="../../_static/jquery.js"></script>
<script type="text/javascript" src="../../_static/underscore.js"></script>
<script type="text/javascript" src="../../_static/doctools.js"></script>
<script type="text/javascript" src="../../_static/translations.js"></script>
<script type="text/javascript" src="https://cdn.bootcss.com/mathjax/2.7.0/MathJax.js"></script>
<script type="text/javascript" src="../../_static/js/theme.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/js/perfect-scrollbar.jquery.min.js"></script>
<script src="../../_static/js/paddle_doc_init.js"></script>
</body>
</html>
@@ -413,7 +413,7 @@ trainer.train(
 # define training dataset reader
 def train_reader():
     train_x = np.array([[1, 1], [1, 2], [3, 4], [5, 2]])
-    train_y = np.array([-2, -3, -7, -7])
+    train_y = np.array([[-2], [-3], [-7], [-7]])
     def reader():
         for i in xrange(train_y.shape[0]):
...
.ai-ecology {
display: -webkit-box;
display: -webkit-flex;
display: flex;
}
.ai-ecology-item {
-webkit-box-flex: 1;
-webkit-flex: 1;
flex: 1;
position: relative;
padding-bottom: 50%;
}
.ai-ecology-main {
position: absolute;
top: 0;
bottom: 0;
left: 0;
right: 0;
}
.ai-ecology-main-paddle {
top: 50%;
-webkit-transform: translateY(-50%);
transform: translateY(-50%);
}
.ai-ecology-title,
.ai-ecology-intro {
text-align: center;
color: #fff;
}
.ai-ecology-title {
margin-bottom: .5rem;
padding: 0.2rem;
line-height: 1;
font-size: 1.7rem;
}
.ai-ecology-title-paddle {
font-size: 2rem;
}
.ai-ecology-intro {
line-height: 1;
font-size: 1rem;
color: #ccc;
letter-spacing: 1px;
}
.ai-ecology-intro-paddle {
font-size: 1rem;
color: #fff;
margin-bottom: 1.5rem;
}
.ai-ecology-mask {
position: absolute;
top: 0;
bottom: 0;
left: 0;
right: 0;
background: rgba(35, 35, 35, 0.75);
}
.ai-ecology-mask-paddle {
background: #000000;
opacity: 0.4;
}
.ai-ecology-icon {
width: 3.75rem;
height: 3.75rem;
margin: 25% auto .875rem;
background-size: contain;
}
.ai-ecology-btn {
display: inline-block;
font-size: 1.2rem;
color: #fff;
line-height: 3rem;
background: rgba(0, 0, 0, 0.5);
border: 1px solid #fff;
border-radius: 100px;
}
.ai-ecology-btn-content {
margin: 0 1rem;
}
.ai-ecology-btn-icon {
width: 1.2rem;
height: 1.2rem;
}
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" >
<svg xmlns="http://www.w3.org/2000/svg">
<metadata>Generated by IcoMoon</metadata>
<defs>
<font id="icomoon" horiz-adv-x="1024">
<font-face units-per-em="1024" ascent="960" descent="-64" />
<missing-glyph horiz-adv-x="1024" />
<glyph unicode="&#x20;" horiz-adv-x="512" d="" />
<glyph unicode="&#xe900;" glyph-name="down-arrow" d="M72.546 638.514l59.459 59.458 384.424-384.422 387.842 387.824 59.459-59.461-447.301-447.285z" />
<glyph unicode="&#xe901;" glyph-name="correct" horiz-adv-x="1051" d="M100.585 561.15l195.508-197.741c0 0 70.080 123.383 210.13 267.131s334.299 329.46 334.299 329.46v-293.844c0 0-162.921-109.057-305.645-249.321s-219.682-276.035-219.682-276.035c0 0-3.677 4.232-66.86 71.235s-248.336 258.226-248.336 258.226z" />
<glyph unicode="&#xe9bd;" glyph-name="menu" d="M64 768h896v-192h-896zM64 512h896v-192h-896zM64 256h896v-192h-896z" />
<glyph unicode="&#xea10;" glyph-name="checkmark" d="M864 832l-480-480-224 224-160-160 384-384 640 640z" />
</font></defs></svg>
<!doctype html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>PaddlePaddle</title>
<link rel="stylesheet" href="css/vendor.css">
<link rel="stylesheet" href="css/index.cn.css">
</head>
<body>
<script type="text/javascript">
function browserRedirect() {
var sUserAgent= navigator.userAgent.toLowerCase();
var bIsIpad= sUserAgent.match(/ipad/i) == "ipad";
var bIsIphoneOs= sUserAgent.match(/iphone os/i) == "iphone os";
var bIsMidp= sUserAgent.match(/midp/i) == "midp";
var bIsUc7= sUserAgent.match(/rv:1.2.3.4/i) == "rv:1.2.3.4";
var bIsUc= sUserAgent.match(/ucweb/i) == "ucweb";
var bIsAndroid= sUserAgent.match(/android/i) == "android";
var bIsCE= sUserAgent.match(/windows ce/i) == "windows ce";
var bIsWM= sUserAgent.match(/windows mobile/i) == "windows mobile";
if (bIsIpad || bIsIphoneOs || bIsMidp || bIsUc7 || bIsUc || bIsAndroid || bIsCE || bIsWM) {
} else {
window.location.href = 'http://www.paddlepaddle.org/index.html';
}
}
browserRedirect();
</script>
<div id="app" data-server-rendered="true" class="ai-app"><div class="ai-top-nav"><img src="images/home/LOGO.png" class="ai-top-nav-logo"> <span class="ai-top-nav-title">PaddlePaddle</span> <span class="ai-top-nav-item"><a href="index.html">English</a></span></div> <div class="ai-main"><div class="ai-head-poster"><div class="ai-head-poster-mask"></div> <img src="images/home/header-bg-1.png" alt="" class="ai-head-poster-bg"> <div class="ai-head-poster-content"><div class="ai-head-poster-heading">易学易用的分布式深度学习平台</div> <div class="ai-head-poster-intro">正在为100+项产品提供深度学习算法支持</div> <div style="text-align:center;margin-bottom:1.8rem;"><a href="https://github.com/PaddlePaddle/Paddle"><div class="ai-ecology-btn" style="text-align:center;font-size:1.2rem;color:#fff;line-height:3rem;background:rgba(0,0,0,0.50);border-color:#fff;"><div class="ai-ecology-btn-content"><img src="images/home/githublogo.png" class="ai-ecology-btn-icon">
Fork me on Github
</div></div></a></div></div></div> <div class="ai-head-more"><div class="ai-head-more-heading">丰富的算法服务</div> <div class="ai-head-more-intro">易用、高效、灵活、扩展性好</div> <!----></div> <div class="ai-application"><div class="ai-application-wrap"><div class="ai-application-item"><img src="images/home/paddle-use-01.png"> <div class="ai-application-item-heading">机器视觉</div> <div class="ai-application-item-wrap"><div class="ai-application-item-intro">卷积神经网络可以识别图像中的主要对象,并输出分类结果</div></div> <div class="ai-application-item-more"><a href="http://book.paddlepaddle.org/03.image_classification/index.cn.html">查看更多 &gt;</a></div></div></div><div class="ai-application-wrap"><div class="ai-application-item"><img src="images/home/paddle-use-02.png"> <div class="ai-application-item-heading">自然语言理解</div> <div class="ai-application-item-wrap"><div class="ai-application-item-intro">利用LSTM网络从IMDB电影评论的中分析出评论者情绪的正面和负面</div></div> <div class="ai-application-item-more"><a href="http://book.paddlepaddle.org/06.understand_sentiment/index.cn.html">查看更多 &gt;</a></div></div></div><div class="ai-application-wrap"><div class="ai-application-item"><img src="images/home/paddle-use-03.png"> <div class="ai-application-item-heading">搜索引擎排序</div> <div class="ai-application-item-wrap"><div class="ai-application-item-intro">分析用户特征、电影特征、点评分数,预测新用户对不同电影的点评分数</div></div> <div class="ai-application-item-more"><a href="http://book.paddlepaddle.org/05.recommender_system/index.cn.html">查看更多 &gt;</a></div></div></div></div> <div class="ai-advantage"><div class="ai-head-more"><div class="ai-head-more-heading">Technology and Service Anvantages</div> <div class="ai-head-more-intro"></div> <!----></div> <div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-01.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
易用性
</div> <div class="ai-advantage-item-desc">
为用户提供了直观、灵活的数据接口和模型配置接口
</div></div></div><div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-02.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
灵活性
</div> <div class="ai-advantage-item-desc">
支持CNN、RNN等多种神经网络结构和优化算法。简单书写配置文件即可实现复杂模型
</div></div></div><div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-03.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
高效性
</div> <div class="ai-advantage-item-desc">
在计算、存储、通信、架构等方面都做了高效优化,充分发挥各种资源的性能
</div></div></div><div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-04.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
扩展性
</div> <div class="ai-advantage-item-desc">
全面支持多核、多GPU、多机环境。轻松应对大规模数据训练需求
</div></div></div> <div class="ai-solution-collapse-btn" style="display:none;">
展开全部
<span class="ai-solution-collapse-icon icon-down-arrow"></span></div></div> <div class="ai-head-more"><div class="ai-head-more-heading">现在开始使用PaddlePaddle</div> <div class="ai-head-more-intro">易学易用的分布式深度学习平台</div> <!----></div> <div style="text-align:center;margin-bottom:1.8rem;"><a href="https://github.com/PaddlePaddle/Paddle"><div class="ai-ecology-btn" style="text-align:center;font-size:1.2rem;color:#fff;line-height:3rem;background:#006FEF;"><div class="ai-ecology-btn-content"><img src="images/home/githublogo.png" class="ai-ecology-btn-icon">
Fork me on Github
</div></div></a></div> <div class="ai-footer">©2017 Baidu 使用必读</div></div></div>
<script src="js/vendor.js"></script>
<script src="js/index.cn.js"></script>
</body>
</html>
<!doctype html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>PaddlePaddle</title>
<link rel="stylesheet" href="css/vendor.css">
<link rel="stylesheet" href="css/index.css">
</head>
<body>
<script type="text/javascript">
function browserRedirect() {
var sUserAgent= navigator.userAgent.toLowerCase();
var bIsIpad= sUserAgent.match(/ipad/i) == "ipad";
var bIsIphoneOs= sUserAgent.match(/iphone os/i) == "iphone os";
var bIsMidp= sUserAgent.match(/midp/i) == "midp";
var bIsUc7= sUserAgent.match(/rv:1.2.3.4/i) == "rv:1.2.3.4";
var bIsUc= sUserAgent.match(/ucweb/i) == "ucweb";
var bIsAndroid= sUserAgent.match(/android/i) == "android";
var bIsCE= sUserAgent.match(/windows ce/i) == "windows ce";
var bIsWM= sUserAgent.match(/windows mobile/i) == "windows mobile";
if (bIsIpad || bIsIphoneOs || bIsMidp || bIsUc7 || bIsUc || bIsAndroid || bIsCE || bIsWM) {
} else {
window.location.href = 'http://www.paddlepaddle.org/index.html';
}
}
browserRedirect();
</script>
<div id="app" data-server-rendered="true" class="ai-app"><div class="ai-top-nav"><img src="images/home/LOGO.png" class="ai-top-nav-logo"> <span class="ai-top-nav-title">PaddlePaddle</span> <span class="ai-top-nav-item"><a href="index.cn.html">中文版</a></span></div> <div class="ai-main"><div class="ai-head-poster"><div class="ai-head-poster-mask"></div> <img src="images/home/header-bg-1.png" alt="" class="ai-head-poster-bg"> <div class="ai-head-poster-content"><div class="ai-head-poster-heading">Easy to Learn and Use Distributed Deep Learning Platform</div> <div class="ai-head-poster-intro">Providing deep learning algorithms for 100+ products</div> <div style="text-align:center;margin-bottom:1.8rem;"><a href="https://github.com/PaddlePaddle/Paddle"><div class="ai-ecology-btn" style="text-align:center;font-size:1.2rem;color:#fff;line-height:3rem;background:rgba(0,0,0,0.50);border-color:#fff;"><div class="ai-ecology-btn-content"><img src="images/home/githublogo.png" class="ai-ecology-btn-icon">
                  Fork me on GitHub
</div></div></a></div></div></div> <div class="ai-head-more"><div class="ai-head-more-heading">Extensive Algorithmic Service</div> <div class="ai-head-more-intro">Easy to use, efficient, flexible, and scalable</div> <!----></div> <div class="ai-application"><div class="ai-application-wrap"><div class="ai-application-item"><img src="images/home/paddle-use-01.png"> <div class="ai-application-item-heading">Machine Vision</div> <div class="ai-application-item-wrap"><div class="ai-application-item-intro">A convolutional neural network identifies the main object in an image and outputs the classification result</div></div> <div class="ai-application-item-more"><a href="http://book.paddlepaddle.org/03.image_classification/index.html">Read more &gt;</a></div></div></div><div class="ai-application-wrap"><div class="ai-application-item"><img src="images/home/paddle-use-02.png"> <div class="ai-application-item-heading">Natural Language Understanding</div> <div class="ai-application-item-wrap"><div class="ai-application-item-intro">An LSTM network analyzes IMDB film reviews to determine whether the commenter&#x27;s sentiment is positive or negative</div></div> <div class="ai-application-item-more"><a href="http://book.paddlepaddle.org/06.understand_sentiment/index.html">Read more &gt;</a></div></div></div><div class="ai-application-wrap"><div class="ai-application-item"><img src="images/home/paddle-use-03.png"> <div class="ai-application-item-heading">Search Engine Ranking</div> <div class="ai-application-item-wrap"><div class="ai-application-item-intro">Analyzes user characteristics, movie features, and rating scores to predict new users&#x27; ratings for different movies</div></div> <div class="ai-application-item-more"><a href="http://book.paddlepaddle.org/05.recommender_system/index.html">Read more &gt;</a></div></div></div></div> <div class="ai-advantage"><div class="ai-head-more"><div class="ai-head-more-heading">Technology and Service Advantages</div> <div 
class="ai-head-more-intro"></div> <!----></div> <div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-01.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
Ease of use
</div> <div class="ai-advantage-item-desc">
      Provides an intuitive and flexible interface for loading data and specifying model structure.
</div></div></div><div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-02.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
Flexibility
</div> <div class="ai-advantage-item-desc">
      Supports CNN, RNN, and other neural network structures. Complex models are easy to configure.
</div></div></div><div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-03.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
Efficiency
</div> <div class="ai-advantage-item-desc">
      Efficient optimization of computation, memory, communication, and architecture.
</div></div></div><div class="ai-advantage-wrap"><div class="ai-advantage-item"><img src="images/home/TASA-ICON-04.png" class="ai-advantage-item-img"> <div class="ai-advantage-item-name">
Scalability
</div> <div class="ai-advantage-item-desc">
      Easily uses many CPUs/GPUs and machines to speed up training and handle large-scale data.
</div></div></div> <div class="ai-solution-collapse-btn" style="display:none;">
      Expand all
<span class="ai-solution-collapse-icon icon-down-arrow"></span></div></div> <div class="ai-head-more"><div class="ai-head-more-heading">Start Using PaddlePaddle</div> <div class="ai-head-more-intro">Just go to www.paddlepaddle.org on your computer</div> <!----></div> <div style="text-align:center;margin-bottom:1.8rem;"><a href="https://github.com/PaddlePaddle/Paddle"><div class="ai-ecology-btn" style="text-align:center;font-size:1.2rem;color:#fff;line-height:3rem;background:#006FEF;"><div class="ai-ecology-btn-content"><img src="images/home/githublogo.png" class="ai-ecology-btn-icon">
              Fork me on GitHub
</div></div></a></div> <div class="ai-footer">©2017 Baidu Terms of Use</div></div></div>
<script src="js/vendor.js"></script>
<script src="js/index.js"></script>
</body>
</html>
webpackJsonp([0],{139:function(e,t,n){function o(e){a||n(342)}var a=!1,r=n(18)(n(151),n(357),o,null,null);r.options.__file="/Users/baidu/Documents/codeRep/paddle-homepage/baidu/ai/portal-mobile/src/page/indexCN.vue",r.esModule&&Object.keys(r.esModule).some(function(e){return"default"!==e&&"__"!==e.substr(0,2)})&&console.error("named exports are not supported in *.vue files."),r.options.functional&&console.error("[vue-loader] indexCN.vue: functional components are not supported with templates, they should use render functions."),e.exports=r.exports},151:function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{default:e}}Object.defineProperty(t,"__esModule",{value:!0});var a=n(131),r=o(a),i=n(130),d=o(i),s=n(132),l=o(s),u=n(92),p=o(u),c=n(91),f=o(c),g=n(135),m=o(g),h=n(134),v=o(h),b=n(133),y=o(b),_=n(90),x=o(_),S=n(137),w=o(S),M=n(136),k=o(M),B=n(126),N=n(89);t.default={components:{app:l.default,heading:p.default,headMore:f.default,headPoster:m.default,application:v.default,advantage:y.default,gitButton:x.default,solution:w.default,newsItem:k.default,swipe:r.default,"swipe-item":d.default},data:function(){return{topNav:{item:"English",link:"index.html"},poster:{heading:"易学易用的分布式深度学习平台",intro:"正在为100+项产品提供深度学习算法支持",posterBg:B},headMore0:{heading:"丰富的算法服务",intro:"易用、高效、灵活、扩展性好"},headMore1:{heading:"现在开始使用PaddlePaddle",intro:"易学易用的分布式深度学习平台"},gitButton:{align:"center",margin:1.8,text:"Fork me on Github",imgSrc:N,styleObject:{textAlign:"center",fontSize:"1.2rem",color:"#fff",lineHeight:"3rem",background:"#006FEF"}},moreShowText:"查看更多 
>",applications:[{imgSrc:n(127),heading:"机器视觉",intro:"卷积神经网络可以识别图像中的主要对象,并输出分类结果",link:"http://book.paddlepaddle.org/03.image_classification/index.cn.html"},{imgSrc:n(128),heading:"自然语言理解",intro:"利用LSTM网络从IMDB电影评论的中分析出评论者情绪的正面和负面",link:"http://book.paddlepaddle.org/06.understand_sentiment/index.cn.html"},{imgSrc:n(129),heading:"搜索引擎排序",intro:"分析用户特征、电影特征、点评分数,预测新用户对不同电影的点评分数",link:"http://book.paddlepaddle.org/05.recommender_system/index.cn.html"}],advantages:[{src:n(122),name:"易用性",desc:"为用户提供了直观、灵活的数据接口和模型配置接口"},{src:n(123),name:"灵活性",desc:"支持CNN、RNN等多种神经网络结构和优化算法。简单书写配置文件即可实现复杂模型"},{src:n(124),name:"高效性",desc:"在计算、存储、通信、架构等方面都做了高效优化,充分发挥各种资源的性能"},{src:n(125),name:"扩展性",desc:"全面支持多核、多GPU、多机环境。轻松应对大规模数据训练需求"}],swipeSrc:B,gitSrc:N,news:[]}}}},152:function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{default:e}}var a="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(e){return typeof e}:function(e){return e&&"function"==typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e};n(47);var r=n(41),i=o(r),d=n(62),s=o(d),l=n(139),u=o(l);i.default.use(s.default);var p=new i.default(u.default);"object"===("undefined"==typeof window?"undefined":a(window))?p.$mount("#app"):void 0!==e&&e.exports&&(e.exports=p)},342:function(e,t){},357:function(e,t,n){e.exports={render:function(){var e=this,t=e.$createElement,n=e._self._c||t;return n("app",{attrs:{topNavItem:e.topNav.item,topNavLink:e.topNav.link}},[n("head-poster",{attrs:{heading:e.poster.heading,intro:e.poster.intro,posterBg:e.poster.posterBg}}),e._v(" "),n("head-more",{attrs:{heading:e.headMore0.heading,intro:e.headMore0.intro}}),e._v(" "),n("application",{attrs:{applications:e.applications,moreShowText:e.moreShowText}}),e._v(" "),n("advantage",{attrs:{advantages:e.advantages}}),e._v(" "),n("head-more",{attrs:{heading:e.headMore1.heading,intro:e.headMore1.intro}}),e._v(" 
"),n("git-button",{attrs:{align:e.gitButton.align,margin:e.gitButton.margin,styleObject:e.gitButton.styleObject,imgSrc:e.gitButton.imgSrc,text:e.gitButton.text}})],1)},staticRenderFns:[]},e.exports.render._withStripped=!0}},[152]);
webpackJsonp([1],{138:function(e,t,n){function o(e){a||n(344)}var a=!1,i=n(18)(n(150),n(359),o,null,null);i.options.__file="/Users/baidu/Documents/codeRep/paddle-homepage/baidu/ai/portal-mobile/src/page/index.vue",i.esModule&&Object.keys(i.esModule).some(function(e){return"default"!==e&&"__"!==e.substr(0,2)})&&console.error("named exports are not supported in *.vue files."),i.options.functional&&console.error("[vue-loader] index.vue: functional components are not supported with templates, they should use render functions."),e.exports=i.exports},150:function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{default:e}}Object.defineProperty(t,"__esModule",{value:!0});var a=n(131),i=o(a),r=n(130),d=o(r),s=n(132),l=o(s),u=n(92),c=o(u),p=n(91),f=o(p),m=n(135),g=o(m),h=n(134),v=o(h),y=n(133),b=o(y),x=n(90),w=o(x),S=n(137),_=o(S),k=n(136),M=o(k),B=n(126),E=n(89);t.default={components:{app:l.default,heading:c.default,headMore:f.default,headPoster:g.default,application:v.default,advantage:b.default,gitButton:w.default,solution:_.default,newsItem:M.default,swipe:i.default,"swipe-item":d.default},data:function(){return{topNav:{item:"中文版",link:"index.cn.html"},poster:{heading:"Easy to Learn and Use Distributed Deep Learning Platform",intro:"Providing deep learning algorithms for 100+ products",posterBg:B},headMore0:{heading:"Extensive Algorithmic Service",intro:"Easy to use, efficient, flexible, and scalable"},headMore1:{heading:"Start Using PaddlePaddle",intro:"Just go to www.paddlepaddle.org on computer"},gitButton:{align:"center",margin:1.8,text:"Fork me on Github",imgSrc:E,styleObject:{textAlign:"center",fontSize:"1.2rem",color:"#fff",lineHeight:"3rem",background:"#006FEF"}},moreShowText:"Read more >",applications:[{imgSrc:n(127),heading:"Mechine Vision",intro:"The convoluted neural network can identify the main object in the image and output the classification 
result",link:"http://book.paddlepaddle.org/03.image_classification/index.html"},{imgSrc:n(128),heading:"Natural Language Understanding",intro:"Using the LSTM network to analyze the positive and negative aspects of the commenter's emotions from IMDB film review",link:"http://book.paddlepaddle.org/06.understand_sentiment/index.html"},{imgSrc:n(129),heading:"Search Engine Ranking",intro:"Analyze user characteristics, movie features, rating scores, predict new users' ratings for different movies",link:"http://book.paddlepaddle.org/05.recommender_system/index.html"}],advantages:[{src:n(122),name:"Ease of use",desc:"Provids an intuitive and flexible interface for loading data and specifying model structure."},{src:n(123),name:"Flexibility",desc:"Supports CNN, RNN and other neural network. Easy to configure complex models."},{src:n(124),name:"Efficiency",desc:"Efficient optimization of computing, memory, communications and architecture."},{src:n(125),name:"Scalability",desc:"Easy to use many CPUs/GPUs and machines to speed up your training and handle large-scale data easily."}],swipeSrc:B,gitSrc:E,news:[]}}}},153:function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{default:e}}var a="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(e){return typeof e}:function(e){return e&&"function"==typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e};n(47);var i=n(41),r=o(i),d=n(62),s=o(d),l=n(138),u=o(l);r.default.use(s.default);var c=new r.default(u.default);"object"===("undefined"==typeof window?"undefined":a(window))?c.$mount("#app"):void 0!==e&&e.exports&&(e.exports=c)},344:function(e,t){},359:function(e,t,n){e.exports={render:function(){var e=this,t=e.$createElement,n=e._self._c||t;return n("app",{attrs:{topNavItem:e.topNav.item,topNavLink:e.topNav.link}},[n("head-poster",{attrs:{heading:e.poster.heading,intro:e.poster.intro,posterBg:e.poster.posterBg}}),e._v(" 
"),n("head-more",{attrs:{heading:e.headMore0.heading,intro:e.headMore0.intro}}),e._v(" "),n("application",{attrs:{applications:e.applications,moreShowText:e.moreShowText}}),e._v(" "),n("advantage",{attrs:{advantages:e.advantages}}),e._v(" "),n("head-more",{attrs:{heading:e.headMore1.heading,intro:e.headMore1.intro}}),e._v(" "),n("git-button",{attrs:{align:e.gitButton.align,margin:e.gitButton.margin,styleObject:e.gitButton.styleObject,imgSrc:e.gitButton.imgSrc,text:e.gitButton.text}})],1)},staticRenderFns:[]},e.exports.render._withStripped=!0}},[153]);