diff --git a/doc/fluid/api/data.rst b/doc/fluid/api/data.rst
new file mode 100644
index 0000000000000000000000000000000000000000..b56c7332cc284649c7e04328e51a7faa78593a39
--- /dev/null
+++ b/doc/fluid/api/data.rst
@@ -0,0 +1,10 @@
+==================================
+Data Reader Interface and DataSets
+==================================
+
+.. toctree::
+ :maxdepth: 1
+
+ data/data_reader.rst
+ data/image.rst
+ data/dataset.rst
diff --git a/doc/fluid/api/data/data_reader.rst b/doc/fluid/api/data/data_reader.rst
new file mode 100644
index 0000000000000000000000000000000000000000..d7c896a6270b488ca4449e5211d0d0879eda6ac5
--- /dev/null
+++ b/doc/fluid/api/data/data_reader.rst
@@ -0,0 +1,72 @@
+=====================
+Data Reader Interface
+=====================
+
+
+DataTypes
+=========
+
+.. autofunction:: paddle.v2.data_type.dense_array
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.integer_value
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.integer_value_sequence
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.integer_value_sub_sequence
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_binary_vector
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_binary_vector_sequence
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_binary_vector_sub_sequence
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_float_vector
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_float_vector_sequence
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_float_vector_sub_sequence
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_non_value_slot
+ :noindex:
+
+.. autofunction:: paddle.v2.data_type.sparse_value_slot
+ :noindex:
+
+.. autoclass:: paddle.v2.data_type.InputType
+ :members:
+ :noindex:
+
+DataFeeder
+==========
+
+.. automodule:: paddle.v2.data_feeder
+ :members:
+ :noindex:
+
+Reader
+======
+
+.. automodule:: paddle.v2.reader
+ :members:
+ :noindex:
+
+.. automodule:: paddle.v2.reader.creator
+ :members:
+ :noindex:
+
+minibatch
+=========
+
+.. automodule:: paddle.v2.minibatch
+ :members:
+ :noindex:
diff --git a/doc/fluid/api/data/dataset.rst b/doc/fluid/api/data/dataset.rst
new file mode 100644
index 0000000000000000000000000000000000000000..e7c8be4452bf55e0967d750c2e624e8e316e9330
--- /dev/null
+++ b/doc/fluid/api/data/dataset.rst
@@ -0,0 +1,82 @@
+Dataset
+=======
+
+.. automodule:: paddle.dataset
+ :members:
+ :noindex:
+
+mnist
++++++
+
+.. automodule:: paddle.dataset.mnist
+ :members:
+ :noindex:
+
+cifar
++++++
+
+.. automodule:: paddle.dataset.cifar
+ :members:
+ :noindex:
+
+conll05
++++++++
+
+.. automodule:: paddle.dataset.conll05
+ :members: get_dict,get_embedding,test
+ :noindex:
+
+imdb
+++++
+
+.. automodule:: paddle.dataset.imdb
+ :members:
+ :noindex:
+
+imikolov
+++++++++
+
+.. automodule:: paddle.dataset.imikolov
+ :members:
+ :noindex:
+
+movielens
++++++++++
+
+.. automodule:: paddle.dataset.movielens
+ :members:
+ :noindex:
+
+.. autoclass:: paddle.dataset.movielens.MovieInfo
+ :noindex:
+
+.. autoclass:: paddle.dataset.movielens.UserInfo
+ :noindex:
+
+sentiment
++++++++++
+
+.. automodule:: paddle.dataset.sentiment
+ :members:
+ :noindex:
+
+uci_housing
++++++++++++
+
+.. automodule:: paddle.dataset.uci_housing
+ :members:
+ :noindex:
+
+wmt14
++++++
+
+.. automodule:: paddle.dataset.wmt14
+ :members:
+ :noindex:
+
+wmt16
++++++
+
+.. automodule:: paddle.dataset.wmt16
+ :members:
+ :noindex:
diff --git a/doc/fluid/api/data/image.rst b/doc/fluid/api/data/image.rst
new file mode 100644
index 0000000000000000000000000000000000000000..97651ffa6be56cf3ecaca2caca38a353fa5c1f49
--- /dev/null
+++ b/doc/fluid/api/data/image.rst
@@ -0,0 +1,5 @@
+Image Interface
+===============
+
+.. automodule:: paddle.v2.image
+ :members:
diff --git a/doc/fluid/api/index_en.rst b/doc/fluid/api/index_en.rst
index b0710d8b19956eb235890fdb2a2d764084416aa5..06c686d9508635abd41571983e00be174e94743e 100644
--- a/doc/fluid/api/index_en.rst
+++ b/doc/fluid/api/index_en.rst
@@ -16,3 +16,4 @@ Fluid
profiler.rst
regularizer.rst
io.rst
+ data.rst
diff --git a/doc/fluid/design/onnx/images/project_structure.png b/doc/fluid/design/onnx/images/project_structure.png
new file mode 100644
index 0000000000000000000000000000000000000000..ab1c2ff23cfff586516876684348bb15bd2084fc
Binary files /dev/null and b/doc/fluid/design/onnx/images/project_structure.png differ
diff --git a/doc/fluid/design/onnx/onnx_convertor.md b/doc/fluid/design/onnx/onnx_convertor.md
new file mode 100644
index 0000000000000000000000000000000000000000..bc1665d7c33eb54cb63e5306a439c1ca67016d1e
--- /dev/null
+++ b/doc/fluid/design/onnx/onnx_convertor.md
@@ -0,0 +1,131 @@
+# Background
+
+[ONNX (Open Neural Network Exchange)](https://github.com/onnx/onnx) bridges different deep learning frameworks by providing an open-source graph format for models. Models trained in other frameworks can be converted into the ONNX format and executed for inference through ONNX's built-in operators; this direction is called a **frontend**. With the inverse conversion (called a **backend**), different frameworks can in principle share any model supported by ONNX. Most mainstream frameworks, such as Caffe2, PyTorch, and MXNet, have joined the ONNX community, and there is momentum driving more and more vendors to support ONNX or even choose it as the only machine learning runtime on their devices.
+
+Therefore, it is necessary to enable conversion between PaddlePaddle and ONNX. This design doc aims at implementing a convertor, mainly for converting between **Fluid** models and ONNX (it is very likely that we may support the older v2 models in the future). A complete convertor should be bidirectional, with both a frontend AND a backend, but considering the relative importance, we will start with the frontend, i.e. converting Fluid models to ONNX models.
+
+
+# How it works
+
+ONNX has a [working list of operators](https://github.com/onnx/onnx/blob/master/docs/Operators.md) which is versioned.
+
+Since we prioritize implementing the frontend over the backend, the coverage of Fluid -> ONNX operators comes down to the choice of models to be supported (see the section `Supported models`). Eventually, this will allow us to reach wide coverage of all operators.
+
+Here are a few major considerations when it comes to converting models:
+
+- **Op-level conversion**: How to map the inputs, attributes, and outputs of each Paddle operator to those of the corresponding ONNX operator. In several cases these require transformations. For each direction (frontend vs. backend), a different conversion mapping is needed; a sketch of a single op-level conversion is given right after this list.
+- **Parameters (weights) initialization**: Setting initial parameters on different nodes.
+- **Tensor data type mapping** (Note: Some ONNX data types are not supported in Fluid)
+- **Network representation adaption**: Fluid `ProgramDesc` includes nested blocks. Since ONNX is free of nesting, the `ProgramDesc` ops need to be traversed so that only ops from the global scope in the root block are included. The variables used as inputs and outputs should also be in this scope.
+- **Model validation**: There are two kinds of validations that are necessary:
+ 1. We need to ensure that the inference outputs of the ops run inside a model are the same as those obtained when running the converted ONNX ops through an alternative ONNX backend.
+ 2. Checking to see if the generated nodes on the graph are validated by the internal ONNX checkers.
+- **Versioning**: ONNX versions its op listing. In fact, it has versioning on 3 different levels: ops, graphs, and ONNX models. This requires us to be conscious about versioning the convertor and to update the tests and op conversion logic for each release. It also implies that we release pre-trained ONNX models upon each version release.
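+
+The sketch below illustrates what such an op-level converter could look like, built on ONNX's `make_node` helper. The function name `convert_mul` and the assumption that a 2-D Fluid `mul` op maps to ONNX `MatMul` are illustrative only, not part of the final design:
+
+```
+# Illustrative sketch of one op-level converter (assumed names, not a final API).
+from onnx import helper
+
+
+def convert_mul(fluid_op):
+    """Build a single ONNX node from one Fluid op description."""
+    return helper.make_node(
+        'MatMul',
+        inputs=[fluid_op.input('X')[0], fluid_op.input('Y')[0]],
+        outputs=[fluid_op.output('Out')[0]])
+```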
+
+One thing that makes this conversion more feasible in Fluid's case is the use of a static IR, the `ProgramDesc`, as opposed to the dynamic graphs created by frameworks such as PyTorch.
+
+
+# Project structure
+
+<p align="center">
+<img src="./images/project_structure.png"/>
+</p>
+
+The project contains four important parts:
+
+* **fluid**: The directory that contains wrappers for Fluid-related APIs. Fluid provides some low-level APIs to parse or generate the inference model; however, using these low-level APIs directly makes the code tediously long. This module wraps the low-level APIs into simplified interfaces.
+
+* **onnx**: This is a Python package provided by ONNX containing helpers for creating nodes, graphs, and eventually binary protobuf models with initializer parameters.
+
+* **onnx_fluid**: Contains the two-way op mapping (Fluid -> ONNX ops and ONNX -> Fluid ops). Called from `convert.py`, the program uses this mapping along with modifier functions to construct ONNX nodes with the help of ONNX's `make_node` helper. It also contains the mapping between data types and the tensor deprecation / amplification logic; a possible shape of this mapping is sketched right after this list.
+
+* **convert.py**: The interface exposed to users. It traverses the global program blocks/variables and constructs the writable ONNX model.
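+
+As a rough sketch of how these parts could fit together (the names `registry` and `fluid_to_onnx` are assumptions for illustration, not the final module layout), `convert.py` could load a Fluid inference model, look up a converter for each op in the root block, and assemble the ONNX model with ONNX's graph helpers:
+
+```
+# Assumed names for illustration; weight export and error handling are omitted.
+import paddle.fluid as fluid
+from onnx import helper
+
+# onnx_fluid would hold a table like this, one entry per supported Fluid op.
+registry = {'mul': convert_mul}  # convert_mul as sketched earlier
+
+
+def fluid_to_onnx(model_dir, exe):
+    # Load the inference ProgramDesc and convert every op we know how to map.
+    program, _, _ = fluid.io.load_inference_model(model_dir, exe)
+    nodes = [registry[op.type](op)
+             for op in program.global_block().ops if op.type in registry]
+    graph = helper.make_graph(nodes, 'fluid-model', inputs=[], outputs=[])
+    return helper.make_model(graph)
+```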
+
+
+# Usage
+The converter should be designed to be very easy to use. Bidirectional conversion between a Fluid inference model and an ONNX binary model will be supported. Model validation will also be provided to verify the correctness of the converted model.
+
+* Convert Fluid inference model to ONNX binary model
+
+ ```
+ python convert.py --fluid_model <fluid model directory> --onnx_model <onnx model file> validate True
+ ```
+
+* Validate the converted model
+
+ ```
+ python validate.py --fluid_model <fluid model directory> --onnx_model <onnx model file>
+ ```
+
+The conversion and the model validation will be completed consecutively, and a readable description of the model structure will be output at the end. For the reverse conversion, users only need to swap the input and the output.
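+
+A minimal sketch of what the numerical validation could look like, assuming the converted model is executed with an ONNX-capable backend (Caffe2's ONNX backend is used here purely as an example) and compared against Fluid's own inference result:
+
+```
+# Backend choice and tolerance are assumptions, not part of the design itself.
+import numpy as np
+import onnx
+import caffe2.python.onnx.backend as backend
+
+
+def validate(onnx_model_path, sample_input, fluid_output):
+    # Run the converted model on an ONNX backend and compare with Fluid's output.
+    onnx_model = onnx.load(onnx_model_path)
+    onnx_output = backend.run_model(onnx_model, [sample_input])[0]
+    assert np.allclose(fluid_output, onnx_output, atol=1e-4)
+```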
+
+
+# Challenges and mitigation
+
+## Cycles
+
+Cycles are unsupported in ONNX. In Paddle, the `while` op is the most prominent example of a cycle.
+
+*Resolution*: We won't support models containing `while` ops that can't be substituted, until ONNX adds support for such ops.
+
+## Sequences
+
+Sequence processing operators like `sequence_expand`, `sequence_reshape`, `sequence_concat`, and `sequence_pool` are not supported by ONNX either, because ONNX does not support non-padded datatypes like LoDTensors.
+
+*Resolution*: Since the runtimes that consume our exported ONNX graphs won't be using LoDTensors in the first place, such sequence operators should be mapped to ONNX ops that perform the necessary transposes, using knowledge of the padding and shape of the Tensors.
+
+## Ops that can't easily be mapped
+
+There are ops that just aren't possible to map today:
+
+**Control flow operators**
+
+Paddle supports control flow ops like `If/Else` and `Switch` (if we ignore the CSP operations like `select` for now). ONNX has `If` support in the experimental phase.
+
+*Resolution*: Map Paddle's `If/Else` to ONNX's `If`, but ignore other control flow operators until ONNX brings support for them.
+
+
+**Non-existent in Fluid**
+
+There are several ONNX operators that are not available in Fluid today, e.g. `InstanceNormalization`, `RandomUniform`, `Unsqueeze`, etc.
+
+*Resolution*: For the initial phase, we can choose not to support ops that our models don't need and that are consequently not available in Fluid. However, for ops that we think might also be necessary for Fluid users, we must implement them on our side and support the ONNX conversion to them. This list is TBD.
+
+
+**Concurrency**
+
+ONNX does not have any considerations for concurrency right now.
+
+*Resolution*: There are two ways to approach this:
+
+a. We choose to not support concurrent models.
+b. We only support `go_op`s (basically threads) shallowly. This could mean that we enqueue `go_op` ops prior to gradient calculations, or even prior to the entire graph, and that's it, since `go_op`s do not support backprop anyway. One of the core target use cases of `go_op`, batch reading, can be handled through this approach.
+
+
+**Overloaded in Fluid**
+
+There are ops in ONNX whose job can't be accomplished by a single corresponding Paddle operator, but only by a collection of operators.
+
+*Resolution*: Chain multiple Paddle operators.
+
+
+## Lack of LoDTensors
+
+As stated above, ONNX only supports simple Tensor values.
+
+*Resolution*: Deprecate to plain old numpy-able tensors.
+
+
+## Reconstruction from deprecated ONNX ops
+
+For higher-level Fluid ops, such as a few offered by the `nn` layer, that do not have direct corresponding mappings but can be converted to ONNX by chaining a series of ops without cycles, it would be useful to map them back to the higher-level Fluid ops once they are converted back from the deprecated ONNX graphs.
+
+*Resolution*: Keep track of the graphs produced by the Paddle -> ONNX deprecation. When converting back from ONNX, if we encounter an identical subgraph through a forward search, we can replace it with the matching higher-level Fluid op.
+
+
+# Supported models
+
+As mentioned above, potential risks may come from the conversion of sequence-related models, including LoDTensor and the `if/else` and `while` operators. So a good choice is to focus on some important feedforward models first, and then implement some simple recurrent models.
+
+- Feedforward models: common models selected in PaddleBook, e.g. VGG, ResNet and some other models proposed by application teams.
+- Recurrent models: language model, stacked LSTMs etc.
diff --git a/doc/v2/howto/cluster/multi_cluster/index_en.rst b/doc/v2/howto/cluster/multi_cluster/index_en.rst
index dac7aaef085c80851c1bbb89250faf2151de4ca6..b69bd5b2dbf1967d65558da06812d76f431c1d5a 100644
--- a/doc/v2/howto/cluster/multi_cluster/index_en.rst
+++ b/doc/v2/howto/cluster/multi_cluster/index_en.rst
@@ -1,19 +1,35 @@
Use different clusters
======================
-PaddlePaddle supports running jobs on several platforms including:
-- `Kubernetes `_ open-source system for automating deployment, scaling, and management of containerized applications from Google.
-- `OpenMPI `_ Mature high performance parallel computing framework.
-- `Fabric `_ A cluster management tool. Write scripts to submit jobs or manage the cluster.
+Users' cluster environments differ, so we provide several ways to deploy a cluster and submit cluster training jobs. They are introduced below:
-We'll introduce cluster job management on these platforms. The examples can be found under `cluster_train_v2 `_ .
+`Kubernetes <http://kubernetes.io>`_ is Google's open-source system for scheduling containerized applications across a cluster, and it provides a complete solution for large-scale production environments. The following guides describe PaddlePaddle's support for Kubernetes:
-These cluster platforms provide API or environment variables for training processes, when the job is dispatched to different nodes. Like node ID, IP or total number of nodes etc.
+.. toctree::
+ :maxdepth: 1
+
+ k8s_cn.md
+ k8s_distributed_cn.md
+
+`OpenMPI <https://www.open-mpi.org>`_ is a mature high-performance parallel computing framework that is widely used in HPC. The following guide describes how to use OpenMPI to run PaddlePaddle cluster training jobs:
.. toctree::
:maxdepth: 1
- fabric_en.md
- openmpi_en.md
- k8s_en.md
- k8s_aws_en.md
+ openmpi_cn.md
+
+`Fabric <http://www.fabfile.org>`_ is a convenient tool for program deployment and management. We provide a way to deploy and manage clusters with Fabric. To learn more, please read the following guide:
+
+.. toctree::
+ :maxdepth: 1
+
+ fabric_cn.md
+
+We also support deploying PaddlePaddle on AWS. Learn more in:
+
+.. toctree::
+ :maxdepth: 1
+
+ k8s_aws_cn.md
+
+The examples can be found under `cluster_train_v2 `_ .
\ No newline at end of file
diff --git a/paddle/fluid/framework/blocking_queue.h b/paddle/fluid/framework/blocking_queue.h
new file mode 100644
index 0000000000000000000000000000000000000000..a19558c0ae59005bee575e8c469c7f95d8780ab1
--- /dev/null
+++ b/paddle/fluid/framework/blocking_queue.h
@@ -0,0 +1,74 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License. */
+
+#pragma once
+
+#include <chrono>              // NOLINT
+#include <condition_variable>  // NOLINT
+#include <deque>
+#include <mutex>               // NOLINT
+
+namespace paddle {
+namespace framework {
+
+template <typename T>
+class BlockingQueue {
+ public:
+ void Push(const T &item) {
+ {
+ std::lock_guard<std::mutex> g(mutex_);
+ q_.emplace_back(item);
+ }
+ cv_.notify_one();
+ }
+
+ template <typename U>
+ void Extend(const U &items) {
+ {
+ std::lock_guard<std::mutex> g(mutex_);
+ for (auto &item : items) {
+ q_.emplace_back(item);
+ }
+ }
+ cv_.notify_all();
+ }
+
+ std::deque<T> PopAll(size_t ms, bool *timeout) {
+ auto time =
+ std::chrono::system_clock::now() + std::chrono::milliseconds(ms);
+ std::unique_lock<std::mutex> lock(mutex_);
+ *timeout = !cv_.wait_until(lock, time, [this] { return !q_.empty(); });
+ std::deque<T> ret;
+ if (!*timeout) {
+ std::swap(ret, q_);
+ }
+ return ret;
+ }
+
+ T Pop() {
+ std::unique_lock lock(mutex_);
+ cv_.wait(lock, [=] { return !q_.empty(); });
+ T rc(std::move(q_.front()));
+ q_.pop_front();
+ return rc;
+ }
+
+ private:
+ std::mutex mutex_;
+ std::condition_variable cv_;
+ std::deque<T> q_;
+};
+
+} // namespace framework
+} // namespace paddle
diff --git a/paddle/fluid/framework/data_transform.cc b/paddle/fluid/framework/data_transform.cc
index bfad9ac1e9cad1936ed961ad1da55787d2faa23e..9c277a27da5af34fc9fb18ca073e369c05ecdf22 100644
--- a/paddle/fluid/framework/data_transform.cc
+++ b/paddle/fluid/framework/data_transform.cc
@@ -63,16 +63,16 @@ void DataTransform(const OpKernelType& expected_kernel_type,
}
void CopyVariableWithTensor(const Variable& in_var, const Tensor& tensor,
- Variable& out_var) {
+ Variable* out_var) {
if (in_var.IsType<LoDTensor>()) {
auto& in_lod_tensor = in_var.Get<LoDTensor>();
- auto* tran_lod_tensor = out_var.GetMutable<LoDTensor>();
+ auto* tran_lod_tensor = out_var->GetMutable<LoDTensor>();
tran_lod_tensor->set_lod(in_lod_tensor.lod());
tran_lod_tensor->set_layout(in_lod_tensor.layout());
tran_lod_tensor->ShareDataWith(tensor);
} else if (in_var.IsType<SelectedRows>()) {
auto& in_selected_rows = in_var.Get<SelectedRows>();
- auto* trans_selected_rows = out_var.GetMutable<SelectedRows>();
+ auto* trans_selected_rows = out_var->GetMutable<SelectedRows>();
trans_selected_rows->set_height(in_selected_rows.height());
trans_selected_rows->set_rows(in_selected_rows.rows());
trans_selected_rows->mutable_value()->ShareDataWith(tensor);
diff --git a/paddle/fluid/framework/data_transform.h b/paddle/fluid/framework/data_transform.h
index 9ec67e6f3d6358cd658e198602f5e802a0ba4cc9..dee5d8c7c1126013742460df1d94bb364220ad09 100644
--- a/paddle/fluid/framework/data_transform.h
+++ b/paddle/fluid/framework/data_transform.h
@@ -35,7 +35,7 @@ void DataTransform(const OpKernelType& expected_kernel_type,
const Tensor& input_tensor, Tensor* out);
void CopyVariableWithTensor(const Variable& in_var, const Tensor& tensor,
- Variable& out_var);
+ Variable* out_var);
} // namespace framework
} // namespace paddle
diff --git a/paddle/fluid/framework/details/threaded_ssa_graph_executor.h b/paddle/fluid/framework/details/threaded_ssa_graph_executor.h
index d70bbd4ef0eb02d1b473bf88e526996819aec5f9..d089b79d91327e38408439a8019ec5189ff6d189 100644
--- a/paddle/fluid/framework/details/threaded_ssa_graph_executor.h
+++ b/paddle/fluid/framework/details/threaded_ssa_graph_executor.h
@@ -22,6 +22,7 @@
#include
#include "ThreadPool.h" // ThreadPool in thrird party
+#include "paddle/fluid/framework/blocking_queue.h"
#include "paddle/fluid/framework/details/ssa_graph_executor.h"
namespace paddle {
@@ -30,46 +31,6 @@ class Scope;
namespace details {
-template <typename T>
-class BlockingQueue {
- public:
- void Push(const T &item) {
- {
- std::lock_guard<std::mutex> g(mutex_);
- q_.emplace_back(item);
- }
- cv_.notify_one();
- }
-
- template <typename U>
- void Extend(const U &items) {
- {
- std::lock_guard<std::mutex> g(mutex_);
- for (auto &item : items) {
- q_.emplace_back(item);
- }
- }
- cv_.notify_all();
- }
-
- std::deque<T> PopAll(size_t ms, bool *timeout) {
- auto time =
- std::chrono::system_clock::now() + std::chrono::milliseconds(ms);
- std::unique_lock<std::mutex> lock(mutex_);
- *timeout = !cv_.wait_until(lock, time, [this] { return !q_.empty(); });
- std::deque<T> ret;
- if (!*timeout) {
- std::swap(ret, q_);
- }
- return ret;
- }
-
- private:
- std::mutex mutex_;
- std::condition_variable cv_;
- std::deque<T> q_;
-};
-
class ThreadedSSAGraphExecutor : public SSAGraphExecutor {
public:
ThreadedSSAGraphExecutor(size_t num_threads, bool use_event,
diff --git a/paddle/fluid/framework/executor.cc b/paddle/fluid/framework/executor.cc
index 513e720fd099bcd898a6c73afd1a3a16f6f53aab..766bf0ab0c1c50146ad3f6e048738209428707b9 100644
--- a/paddle/fluid/framework/executor.cc
+++ b/paddle/fluid/framework/executor.cc
@@ -226,15 +226,15 @@ static bool has_fetch_operators(
}
void Executor::Run(const ProgramDesc& program, Scope* scope,
- std::map<std::string, const LoDTensor*>& feed_targets,
- std::map<std::string, LoDTensor*>& fetch_targets,
+ std::map<std::string, const LoDTensor*>* feed_targets,
+ std::map<std::string, LoDTensor*>* fetch_targets,
bool create_vars, const std::string& feed_holder_name,
const std::string& fetch_holder_name) {
platform::RecordBlock b(kProgramId);
bool has_feed_ops =
- has_feed_operators(program.Block(0), feed_targets, feed_holder_name);
+ has_feed_operators(program.Block(0), *feed_targets, feed_holder_name);
bool has_fetch_ops =
- has_fetch_operators(program.Block(0), fetch_targets, fetch_holder_name);
+ has_fetch_operators(program.Block(0), *fetch_targets, fetch_holder_name);
ProgramDesc* copy_program = const_cast<ProgramDesc*>(&program);
if (!has_feed_ops || !has_fetch_ops) {
@@ -250,7 +250,7 @@ void Executor::Run(const ProgramDesc& program, Scope* scope,
feed_holder->SetPersistable(true);
int i = 0;
- for (auto& feed_target : feed_targets) {
+ for (auto& feed_target : (*feed_targets)) {
std::string var_name = feed_target.first;
VLOG(3) << "feed target's name: " << var_name;
@@ -273,7 +273,7 @@ void Executor::Run(const ProgramDesc& program, Scope* scope,
fetch_holder->SetPersistable(true);
int i = 0;
- for (auto& fetch_target : fetch_targets) {
+ for (auto& fetch_target : (*fetch_targets)) {
std::string var_name = fetch_target.first;
VLOG(3) << "fetch target's name: " << var_name;
@@ -361,16 +361,16 @@ void Executor::RunPreparedContext(ExecutorPrepareContext* ctx, Scope* scope,
void Executor::RunPreparedContext(
ExecutorPrepareContext* ctx, Scope* scope,
- std::map<std::string, const LoDTensor*>& feed_targets,
- std::map<std::string, LoDTensor*>& fetch_targets, bool create_vars,
+ std::map<std::string, const LoDTensor*>* feed_targets,
+ std::map<std::string, LoDTensor*>* fetch_targets, bool create_vars,
const std::string& feed_holder_name, const std::string& fetch_holder_name) {
auto& global_block = ctx->prog_.Block(ctx->block_id_);
PADDLE_ENFORCE(
- has_feed_operators(global_block, feed_targets, feed_holder_name),
+ has_feed_operators(global_block, *feed_targets, feed_holder_name),
"Program in ExecutorPrepareContext should has feed_ops.");
PADDLE_ENFORCE(
- has_fetch_operators(global_block, fetch_targets, fetch_holder_name),
+ has_fetch_operators(global_block, *fetch_targets, fetch_holder_name),
"Program in the prepared context should has fetch_ops.");
// map the data of feed_targets to feed_holder
@@ -378,8 +378,8 @@ void Executor::RunPreparedContext(
if (op->Type() == kFeedOpType) {
std::string feed_target_name = op->Output("Out")[0];
int idx = boost::get(op->GetAttr("col"));
- SetFeedVariable(scope, *feed_targets[feed_target_name], feed_holder_name,
- idx);
+ SetFeedVariable(scope, *(*feed_targets)[feed_target_name],
+ feed_holder_name, idx);
}
}
@@ -390,7 +390,7 @@ void Executor::RunPreparedContext(
if (op->Type() == kFetchOpType) {
std::string fetch_target_name = op->Input("X")[0];
int idx = boost::get(op->GetAttr("col"));
- *fetch_targets[fetch_target_name] =
+ *(*fetch_targets)[fetch_target_name] =
GetFetchVariable(*scope, fetch_holder_name, idx);
}
}
diff --git a/paddle/fluid/framework/executor.h b/paddle/fluid/framework/executor.h
index 43defdacf2a1c2f59cf3af2461ae6cfc4c61f5be..4a3d637e2d79f8cbd83412eea2d73e4b497ef1e7 100644
--- a/paddle/fluid/framework/executor.h
+++ b/paddle/fluid/framework/executor.h
@@ -55,8 +55,8 @@ class Executor {
bool create_local_scope = true, bool create_vars = true);
void Run(const ProgramDesc& program, Scope* scope,
- std::map<std::string, const LoDTensor*>& feed_targets,
- std::map<std::string, LoDTensor*>& fetch_targets,
+ std::map<std::string, const LoDTensor*>* feed_targets,
+ std::map<std::string, LoDTensor*>* fetch_targets,
bool create_vars = true,
const std::string& feed_holder_name = "feed",
const std::string& fetch_holder_name = "fetch");
@@ -74,8 +74,8 @@ class Executor {
bool create_vars = true);
void RunPreparedContext(ExecutorPrepareContext* ctx, Scope* scope,
- std::map<std::string, const LoDTensor*>& feed_targets,
- std::map<std::string, LoDTensor*>& fetch_targets,
+ std::map<std::string, const LoDTensor*>* feed_targets,
+ std::map<std::string, LoDTensor*>* fetch_targets,
bool create_vars = true,
const std::string& feed_holder_name = "feed",
const std::string& fetch_holder_name = "fetch");
diff --git a/paddle/fluid/framework/op_desc.cc b/paddle/fluid/framework/op_desc.cc
index 46c834b38b758a2e050d990a464600154cbe51e5..076c45713015797f86a3611dd333132bae40044d 100644
--- a/paddle/fluid/framework/op_desc.cc
+++ b/paddle/fluid/framework/op_desc.cc
@@ -205,8 +205,8 @@ void OpDesc::SetAttr(const std::string &name, const Attribute &v) {
need_update_ = true;
}
-void OpDesc::SetBlockAttr(const std::string &name, BlockDesc &block) {
- this->attrs_[name] = █
+void OpDesc::SetBlockAttr(const std::string &name, BlockDesc *block) {
+ this->attrs_[name] = block;
need_update_ = true;
}
diff --git a/paddle/fluid/framework/op_desc.h b/paddle/fluid/framework/op_desc.h
index cd6777e60a8e354ac634ba1c1fe5db63539f6e93..3ee36a47c156da67a9ff70852665fbbd464bea17 100644
--- a/paddle/fluid/framework/op_desc.h
+++ b/paddle/fluid/framework/op_desc.h
@@ -14,6 +14,7 @@ limitations under the License. */
#pragma once
+#include
#include
#include
#include "paddle/fluid/framework/attribute.h"
@@ -73,7 +74,7 @@ class OpDesc {
void SetAttr(const std::string &name, const Attribute &v);
- void SetBlockAttr(const std::string &name, BlockDesc &block);
+ void SetBlockAttr(const std::string &name, BlockDesc *block);
Attribute GetAttr(const std::string &name) const;
diff --git a/paddle/fluid/framework/operator.cc b/paddle/fluid/framework/operator.cc
index f97bd0827428feeb590fcad16c48f3461517a646..32576423a62a1a12f085d565e7ff267145bf979c 100644
--- a/paddle/fluid/framework/operator.cc
+++ b/paddle/fluid/framework/operator.cc
@@ -171,17 +171,6 @@ std::string OperatorBase::DebugStringEx(const Scope* scope) const {
return ss.str();
}
-void OperatorBase::Rename(const std::string& old_name,
- const std::string& new_name) {
- for (auto& input : inputs_) {
- std::replace(input.second.begin(), input.second.end(), old_name, new_name);
- }
- for (auto& output : outputs_) {
- std::replace(output.second.begin(), output.second.end(), old_name,
- new_name);
- }
-}
-
OperatorBase::OperatorBase(const std::string& type,
const VariableNameMap& inputs,
const VariableNameMap& outputs,
@@ -327,7 +316,6 @@ bool OpSupportGPU(const std::string& op_type) {
auto it = all_kernels.find(op_type);
if (it == all_kernels.end()) {
// All control operator must support GPU
-
return true;
}
for (auto& kern_pair : it->second) {
@@ -554,7 +542,7 @@ void OperatorWithKernel::RunImpl(const Scope& scope,
std::shared_ptr<Tensor> out(new Tensor);
DataTransform(expected_kernel_key, kernel_type_for_var, *tensor_in,
out.get());
- CopyVariableWithTensor(*var, *(out.get()), *trans_var);
+ CopyVariableWithTensor(*var, *(out.get()), trans_var);
}
}
}
diff --git a/paddle/fluid/framework/operator.h b/paddle/fluid/framework/operator.h
index b7a7c69b4c8493f945926c75797c49d327a3197e..826cc57b725ab4b52e5d67ab82e939cbd62a8460 100644
--- a/paddle/fluid/framework/operator.h
+++ b/paddle/fluid/framework/operator.h
@@ -79,31 +79,28 @@ class OperatorBase {
virtual ~OperatorBase() {}
- template <typename T>
- inline const T& Attr(const std::string& name) const {
- PADDLE_ENFORCE(attrs_.count(name) != 0, "%s should be in AttributeMap",
- name);
- return boost::get<T>(attrs_.at(name));
- }
-
- /// if scope is not null, also show dimensions of arguments
- virtual std::string DebugStringEx(const Scope* scope) const;
-
- std::string DebugString() const { return DebugStringEx(nullptr); }
-
- /// Net will call this interface function to Run an op.
+ /// Executor will call this interface function to Run an op.
// The implementation should be written at RunImpl
void Run(const Scope& scope, const platform::Place& place);
// FIXME(typhoonzero): this is only used for recv_op to stop event_loop.
virtual void Stop() {}
- virtual bool IsNetOp() const { return false; }
+ /// if scope is not null, also show dimensions of arguments
+ virtual std::string DebugStringEx(const Scope* scope) const;
+ std::string DebugString() const { return DebugStringEx(nullptr); }
virtual bool SupportGPU() const { return false; }
- /// rename inputs outputs name
- void Rename(const std::string& old_name, const std::string& new_name);
+ const std::string& Type() const { return type_; }
+
+ template <typename T>
+ inline const T& Attr(const std::string& name) const {
+ PADDLE_ENFORCE(attrs_.count(name) != 0, "%s should be in AttributeMap",
+ name);
+ return boost::get<T>(attrs_.at(name));
+ }
+ const AttributeMap& Attrs() const { return attrs_; }
const VariableNameMap& Inputs() const { return inputs_; }
const VariableNameMap& Outputs() const { return outputs_; }
@@ -112,7 +109,7 @@ class OperatorBase {
std::string Input(const std::string& name) const;
//! Get a input which has multiple variables.
const std::vector<std::string>& Inputs(const std::string& name) const;
-
+ //! Get all inputs variable names
std::vector<std::string> InputVars() const;
//! Get a output with argument's name described in `op_proto`
@@ -120,13 +117,9 @@ class OperatorBase {
//! Get an output which has multiple variables.
//! TODO add a vector_view to prevent memory copy.
const std::vector<std::string>& Outputs(const std::string& name) const;
-
+ //! Get all outputs variable names
virtual std::vector<std::string> OutputVars(bool has_intermediate) const;
- const std::string& Type() const { return type_; }
- void SetType(const std::string& type) { type_ = type; }
- const AttributeMap& Attrs() const { return attrs_; }
-
// Return a new operator instance, which is as same as this.
// Use unique_ptr to prevent caller forget to delete this pointer.
virtual std::unique_ptr<OperatorBase> Clone() const = 0;
@@ -278,20 +271,6 @@ class ExecutionContext {
return res;
}
- void ShareLoD(const std::string& in, const std::string& out, size_t i = 0,
- size_t j = 0) const {
- PADDLE_ENFORCE_LT(i, InputSize(in));
- PADDLE_ENFORCE_LT(j, OutputSize(out));
- auto* in_var = MultiInputVar(in)[i];
- auto* out_var = MultiOutputVar(out)[j];
- if (!in_var->IsType<LoDTensor>()) return;
- PADDLE_ENFORCE(out_var->IsType<LoDTensor>(),
- "The %d-th output of Output(%s) must be LoDTensor.", j, out);
- auto in_tensor = in_var->Get<LoDTensor>();
- auto* out_tensor = out_var->GetMutable<LoDTensor>();
- out_tensor->set_lod(in_tensor.lod());
- }
-
platform::Place GetPlace() const { return device_context_.GetPlace(); }
template
diff --git a/paddle/fluid/framework/program_desc.cc b/paddle/fluid/framework/program_desc.cc
index 16694bcf76486a9603c41dc19a58dd0a7cb2b719..64fb028f83a539d17885186d5d8ee6ef26f095e9 100644
--- a/paddle/fluid/framework/program_desc.cc
+++ b/paddle/fluid/framework/program_desc.cc
@@ -56,7 +56,7 @@ ProgramDesc::ProgramDesc(const ProgramDesc &o) {
for (const auto &attr : op->Proto()->attrs()) {
if (attr.type() == proto::AttrType::BLOCK) {
size_t blk_idx = attr.block_idx();
- op->SetBlockAttr(attr.name(), *this->MutableBlock(blk_idx));
+ op->SetBlockAttr(attr.name(), this->MutableBlock(blk_idx));
}
}
}
@@ -73,7 +73,7 @@ ProgramDesc::ProgramDesc(const proto::ProgramDesc &desc) {
for (const auto &attr : op->Proto()->attrs()) {
if (attr.type() == proto::AttrType::BLOCK) {
size_t blk_idx = attr.block_idx();
- op->SetBlockAttr(attr.name(), *this->MutableBlock(blk_idx));
+ op->SetBlockAttr(attr.name(), this->MutableBlock(blk_idx));
}
}
}
diff --git a/paddle/fluid/framework/prune.cc b/paddle/fluid/framework/prune.cc
index 107c5bf8ecbc3b46dd5fae87c73d0be4f74d1587..57c1b822d8d4f095f33cba2bfd5210f7ee19dd9f 100644
--- a/paddle/fluid/framework/prune.cc
+++ b/paddle/fluid/framework/prune.cc
@@ -14,19 +14,19 @@ limitations under the License. */
#include "paddle/fluid/framework/prune.h"
+#include
+
#include
#include
#include
#include
#include
-#include
-
namespace paddle {
namespace framework {
-const std::string kFeedOpType = "feed";
-const std::string kFetchOpType = "fetch";
+const char kFeedOpType[] = "feed";
+const char kFetchOpType[] = "fetch";
bool HasDependentVar(const proto::OpDesc& op_desc,
const std::set<std::string>& dependent_vars) {
@@ -68,7 +68,7 @@ bool HasSubBlock(const proto::OpDesc& op_desc) {
// the child block to help pruning
void prune_impl(const proto::ProgramDesc& input, proto::ProgramDesc* output,
int block_id, int parent_block_id,
- std::set<std::string>& dependent_vars) {
+ std::set<std::string>* dependent_vars) {
auto& block = input.blocks(block_id);
auto& ops = block.ops();
@@ -90,11 +90,11 @@ void prune_impl(const proto::ProgramDesc& input, proto::ProgramDesc* output,
std::vector<bool> should_run;
for (auto op_iter = ops.rbegin(); op_iter != ops.rend(); ++op_iter) {
auto& op_desc = *op_iter;
- if (IsTarget(op_desc) || HasDependentVar(op_desc, dependent_vars)) {
+ if (IsTarget(op_desc) || HasDependentVar(op_desc, *dependent_vars)) {
// insert its input to the dependency graph
for (auto& var : op_desc.inputs()) {
for (auto& argu : var.arguments()) {
- dependent_vars.insert(argu);
+ dependent_vars->insert(argu);
}
}
should_run.push_back(true);
@@ -138,7 +138,7 @@ void prune_impl(const proto::ProgramDesc& input, proto::ProgramDesc* output,
// GetSubBlockIndex(*op) is the idx of the sub_block in the input desc
// output_block_id is the idx of the current block in the output desc
prune_impl(input, output, GetSubBlockIndex(*op), output_block_id,
- sub_block_dependent_vars);
+ &sub_block_dependent_vars);
}
}
}
@@ -181,7 +181,7 @@ void prune_impl(const proto::ProgramDesc& input, proto::ProgramDesc* output,
void Prune(const proto::ProgramDesc& input, proto::ProgramDesc* output) {
std::set<std::string> dependent_vars;
output->clear_blocks();
- prune_impl(input, output, 0, -1, dependent_vars);
+ prune_impl(input, output, 0, -1, &dependent_vars);
}
void inference_optimize_impl(proto::ProgramDesc* input, int block_id) {
diff --git a/paddle/fluid/inference/engine.h b/paddle/fluid/inference/engine.h
new file mode 100644
index 0000000000000000000000000000000000000000..0633c052e4dc61d495b9acacb9a7ab6c941a267a
--- /dev/null
+++ b/paddle/fluid/inference/engine.h
@@ -0,0 +1,53 @@
+/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License. */
+
+#pragma once
+
+#include "paddle/fluid/framework/framework.pb.h"
+
+namespace paddle {
+namespace inference {
+
+/*
+ * EngineBase is the base class of all inference engines. An inference engine
+ * takes a paddle program as input, and outputs the result in fluid Tensor
+ * format. It can be used to optimize the performance of computation sub-blocks;
+ * for example, break the original block down into sub-blocks and execute each
+ * sub-block in a different engine.
+ *
+ * For example:
+ * At inference time, most of a ResNet50 model can be put into a subgraph
+ * and run on a TensorRT engine.
+ *
+ * There are several engines, such as TensorRT and other frameworks, so
+ * EngineBase is put forward to give a unified interface for all the
+ * different engine implementations.
+ */
+class EngineBase {
+ public:
+ using DescType = ::paddle::framework::proto::BlockDesc;
+
+ // Build the model and do some preparation, for example, in TensorRT, run
+ // createInferBuilder, buildCudaEngine.
+ virtual void Build(const DescType& paddle_model) = 0;
+
+ // Execute the engine, that will run the inference network.
+ virtual void Execute(int batch_size) = 0;
+
+ virtual ~EngineBase() {}
+
+}; // class EngineBase
+
+} // namespace inference
+} // namespace paddle
diff --git a/paddle/fluid/inference/tensorrt/CMakeLists.txt b/paddle/fluid/inference/tensorrt/CMakeLists.txt
index 37f038f1fb4eda643e69449e2b374d200c34937a..ad850055a52601448c15d5a6b2a6599771971ce0 100644
--- a/paddle/fluid/inference/tensorrt/CMakeLists.txt
+++ b/paddle/fluid/inference/tensorrt/CMakeLists.txt
@@ -1,3 +1,4 @@
nv_test(test_tensorrt SRCS test_tensorrt.cc DEPS dynload_cuda device_context dynamic_loader)
+nv_test(test_tensorrt_engine SRCS test_engine.cc engine.cc DEPS dynload_cuda)
cc_library(tensorrt DEPS tensorrt_convert)
add_subdirectory(convert)
diff --git a/paddle/fluid/inference/tensorrt/engine.cc b/paddle/fluid/inference/tensorrt/engine.cc
new file mode 100644
index 0000000000000000000000000000000000000000..276502e4999df3aecf246fcb9591c59ea9d3f02b
--- /dev/null
+++ b/paddle/fluid/inference/tensorrt/engine.cc
@@ -0,0 +1,134 @@
+/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License. */
+
+#include "paddle/fluid/inference/tensorrt/engine.h"
+
+#include <NvInfer.h>
+#include <cuda.h>
+#include <glog/logging.h>
+#include "paddle/fluid/inference/tensorrt/helper.h"
+#include "paddle/fluid/platform/enforce.h"
+
+namespace paddle {
+namespace inference {
+namespace tensorrt {
+
+void TensorRTEngine::Build(const DescType& paddle_model) {
+ PADDLE_ENFORCE(false, "not implemented");
+}
+
+void TensorRTEngine::Execute(int batch_size) {
+ infer_context_->enqueue(batch_size, buffers_.data(), *stream_, nullptr);
+ cudaStreamSynchronize(*stream_);
+}
+
+TensorRTEngine::~TensorRTEngine() {
+ // clean buffer
+ for (auto& buffer : buffers_) {
+ if (buffer != nullptr) {
+ PADDLE_ENFORCE_EQ(0, cudaFree(buffer));
+ buffer = nullptr;
+ }
+ }
+}
+
+void TensorRTEngine::FreezeNetwork() {
+ PADDLE_ENFORCE(infer_builder_ != nullptr,
+ "Call InitNetwork first to initialize network.");
+ PADDLE_ENFORCE(infer_network_ != nullptr,
+ "Call InitNetwork first to initialize network.");
+ // build engine.
+ infer_builder_->setMaxBatchSize(max_batch_);
+ infer_builder_->setMaxWorkspaceSize(max_workspace_);
+
+ infer_engine_.reset(infer_builder_->buildCudaEngine(*infer_network_));
+ PADDLE_ENFORCE(infer_engine_ != nullptr, "build cuda engine failed!");
+
+ infer_context_.reset(infer_engine_->createExecutionContext());
+
+ // allocate GPU buffers.
+ buffers_.resize(buffer_sizes_.size(), nullptr);
+ for (auto& item : buffer_sizes_) {
+ if (item.second == 0) {
+ auto slot_offset = infer_engine_->getBindingIndex(item.first.c_str());
+ item.second = kDataTypeSize[static_cast<int>(
+ infer_engine_->getBindingDataType(slot_offset))] *
+ AccumDims(infer_engine_->getBindingDimensions(slot_offset));
+ }
+ PADDLE_ENFORCE_EQ(0, cudaMalloc(&buffer(item.first), item.second));
+ }
+}
+
+nvinfer1::ITensor* TensorRTEngine::DeclareInput(const std::string& name,
+ nvinfer1::DataType dtype,
+ const nvinfer1::Dims& dim) {
+ PADDLE_ENFORCE_EQ(0, buffer_sizes_.count(name), "duplicate input name %s",
+ name);
+
+ PADDLE_ENFORCE(infer_network_ != nullptr, "should initnetwork first");
+ auto* input = infer_network_->addInput(name.c_str(), dtype, dim);
+ PADDLE_ENFORCE(input, "infer network add input %s failed", name);
+
+ buffer_sizes_[name] = kDataTypeSize[static_cast<int>(dtype)] * AccumDims(dim);
+ return input;
+}
+
+void TensorRTEngine::DeclareOutput(const nvinfer1::ILayer* layer, int offset,
+ const std::string& name) {
+ PADDLE_ENFORCE_EQ(0, buffer_sizes_.count(name), "duplicate output name %s",
+ name);
+
+ auto* output = layer->getOutput(offset);
+ PADDLE_ENFORCE(output != nullptr);
+ output->setName(name.c_str());
+ infer_network_->markOutput(*output);
+ // output buffers' size can only be decided later, set zero here to mark this
+ // and will be reset later.
+ buffer_sizes_[name] = 0;
+}
+
+void* TensorRTEngine::GetOutputInGPU(const std::string& name) {
+ return buffer(name);
+}
+
+void TensorRTEngine::GetOutputInCPU(const std::string& name, void* dst,
+ size_t max_size) {
+ // determine data size
+ auto it = buffer_sizes_.find(name);
+ PADDLE_ENFORCE(it != buffer_sizes_.end());
+ PADDLE_ENFORCE_GT(it->second, 0);
+ PADDLE_ENFORCE_GE(max_size, it->second);
+
+ PADDLE_ENFORCE_EQ(0, cudaMemcpyAsync(dst, buffer(name), it->second,
+ cudaMemcpyDeviceToHost, *stream_));
+}
+
+void*& TensorRTEngine::buffer(const std::string& name) {
+ PADDLE_ENFORCE(infer_engine_ != nullptr, "call FreezeNetwork first.");
+ auto it = buffer_sizes_.find(name);
+ PADDLE_ENFORCE(it != buffer_sizes_.end());
+ auto slot_offset = infer_engine_->getBindingIndex(name.c_str());
+ return buffers_[slot_offset];
+}
+
+void TensorRTEngine::SetInputFromCPU(const std::string& name, void* data,
+ size_t size) {
+ void* buf = buffer(name);
+ PADDLE_ENFORCE_EQ(
+ 0, cudaMemcpyAsync(buf, data, size, cudaMemcpyHostToDevice, *stream_));
+}
+
+} // namespace tensorrt
+} // namespace inference
+} // namespace paddle
diff --git a/paddle/fluid/inference/tensorrt/engine.h b/paddle/fluid/inference/tensorrt/engine.h
new file mode 100644
index 0000000000000000000000000000000000000000..ff853455b8bf0c9db36353f3cbe6be0ed4948fb3
--- /dev/null
+++ b/paddle/fluid/inference/tensorrt/engine.h
@@ -0,0 +1,144 @@
+/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License. */
+
+#pragma once
+
+#include <NvInfer.h>
+#include <memory>
+#include <unordered_map>
+#include "paddle/fluid/inference/engine.h"
+#include "paddle/fluid/inference/tensorrt/helper.h"
+
+namespace paddle {
+namespace inference {
+namespace tensorrt {
+
+/*
+ * TensorRT Engine.
+ *
+ * There are two alternative ways to use it, one is to build from a paddle
+ * protobuf model, another way is to manually construct the network.
+ */
+class TensorRTEngine : public EngineBase {
+ public:
+ // Weight is model parameter.
+ class Weight {
+ public:
+ Weight(nvinfer1::DataType dtype, void* value, int num_elem) {
+ w_.type = dtype;
+ w_.values = value;
+ w_.count = num_elem;
+ }
+ const nvinfer1::Weights& get() { return w_; }
+
+ private:
+ nvinfer1::Weights w_;
+ };
+
+ TensorRTEngine(int max_batch, int max_workspace, cudaStream_t* stream,
+ nvinfer1::ILogger& logger = NaiveLogger::Global())
+ : max_batch_(max_batch),
+ max_workspace_(max_workspace),
+ stream_(stream),
+ logger_(logger) {}
+
+ virtual ~TensorRTEngine();
+
+ // TODO(Superjomn) implement it later when graph segmentation is supported.
+ virtual void Build(const DescType& paddle_model) override;
+
+ virtual void Execute(int batch_size) override;
+
+ // Initialize the inference network, so that TensorRT layers can add to this
+ // network.
+ void InitNetwork() {
+ infer_builder_.reset(createInferBuilder(logger_));
+ infer_network_.reset(infer_builder_->createNetwork());
+ }
+ // After finishing adding ops, freeze this network and create the execution
+ // environment.
+ void FreezeNetwork();
+
+ // Add an input and set its name, data type and dimension.
+ nvinfer1::ITensor* DeclareInput(const std::string& name,
+ nvinfer1::DataType dtype,
+ const nvinfer1::Dims& dim);
+ // Set the offset-th output from a layer as the network's output, and set its
+ // name.
+ void DeclareOutput(const nvinfer1::ILayer* layer, int offset,
+ const std::string& name);
+
+ // GPU memory address for an ITensor with specific name. One can operate on
+ // these memory directly for acceleration, for example, output the converted
+ // data directly to the buffer to save data copy overhead.
+ // NOTE this should be used after calling `FreezeNetwork`.
+ void*& buffer(const std::string& name);
+
+ // Fill an input from CPU memory with name and size.
+ void SetInputFromCPU(const std::string& name, void* data, size_t size);
+ // TODO(Superjomn) is this method necessary given that buffer(xxx) can be
+ // accessed directly. Fill an input from GPU memory with name and size.
+ void SetInputFromGPU(const std::string& name, void* data, size_t size);
+ // Get an output called name, the output of tensorrt is in GPU, so this method
+ // will just return the output's GPU memory address.
+ void* GetOutputInGPU(const std::string& name);
+ // LOW EFFICIENCY! Get output to CPU, this will trigger a memory copy from GPU
+ // to CPU.
+ void GetOutputInCPU(const std::string& name, void* dst, size_t max_size);
+
+ nvinfer1::ICudaEngine* engine() { return infer_engine_.get(); }
+ nvinfer1::INetworkDefinition* network() { return infer_network_.get(); }
+
+ private:
+ // the max batch size
+ int max_batch_;
+ // the max memory size the engine uses
+ int max_workspace_;
+ cudaStream_t* stream_;
+ nvinfer1::ILogger& logger_;
+
+ std::vector<void*> buffers_;
+ // max data size for the buffers.
+ std::unordered_map<std::string, size_t> buffer_sizes_;
+
+ // TensorRT related internal members
+ template <typename T>
+ struct Destroyer {
+ void operator()(T* x) { x->destroy(); }
+ };
+ template <typename T>
+ using infer_ptr = std::unique_ptr<T, Destroyer<T>>;
+ infer_ptr<nvinfer1::IBuilder> infer_builder_;
+ infer_ptr<nvinfer1::INetworkDefinition> infer_network_;
+ infer_ptr<nvinfer1::ICudaEngine> infer_engine_;
+ infer_ptr<nvinfer1::IExecutionContext> infer_context_;
+}; // class TensorRTEngine
+
+// Add a layer__ into engine__ with args ARGS.
+// For example:
+// TRT_ENGINE_ADD_LAYER(xxx, FullyConnected, input, dim, weights, bias)
+//
+// Reference
+// https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#charRNN_define_network
+//
+// will add a fully connected layer into the engine.
+// TensorRT has too many layers, so it is not wise to add member functions for
+// them, and a macro like this is more extensible when the underlying TensorRT
+// library adds new layer support.
+#define TRT_ENGINE_ADD_LAYER(engine__, layer__, ARGS...) \
+ engine__->network()->add##layer__(ARGS);
+
+} // namespace tensorrt
+} // namespace inference
+} // namespace paddle
diff --git a/paddle/fluid/inference/tensorrt/helper.h b/paddle/fluid/inference/tensorrt/helper.h
new file mode 100644
index 0000000000000000000000000000000000000000..796283d325ceb84c733eff5c119b808300bca069
--- /dev/null
+++ b/paddle/fluid/inference/tensorrt/helper.h
@@ -0,0 +1,88 @@
+/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#pragma once
+
+#include <NvInfer.h>
+#include <cuda.h>
+#include <glog/logging.h>
+#include "paddle/fluid/platform/dynload/tensorrt.h"
+#include "paddle/fluid/platform/enforce.h"
+
+namespace paddle {
+namespace inference {
+namespace tensorrt {
+
+namespace dy = paddle::platform::dynload;
+
+static size_t AccumDims(nvinfer1::Dims dims) {
+ size_t num = dims.nbDims == 0 ? 0 : 1;
+ for (int i = 0; i < dims.nbDims; i++) {
+ PADDLE_ENFORCE_GT(dims.d[i], 0);
+ num *= dims.d[i];
+ }
+ return num;
+}
+
+// TensorRT data type to size
+const int kDataTypeSize[] = {
+ 4, // kFLOAT
+ 2, // kHALF
+ 1, // kINT8
+ 4 // kINT32
+};
+
+// The following two APIs are implemented in TensorRT's header file and cannot
+// be loaded from the dynamic library, so we create our own implementations that
+// call into the dynamic library directly.
+static nvinfer1::IBuilder* createInferBuilder(nvinfer1::ILogger& logger) {
+ return static_cast<nvinfer1::IBuilder*>(
+ dy::createInferBuilder_INTERNAL(&logger, NV_TENSORRT_VERSION));
+}
+static nvinfer1::IRuntime* createInferRuntime(nvinfer1::ILogger& logger) {
+ return static_cast<nvinfer1::IRuntime*>(
+ dy::createInferRuntime_INTERNAL(&logger, NV_TENSORRT_VERSION));
+}
+
+// A logger for creating a TensorRT infer builder.
+class NaiveLogger : public nvinfer1::ILogger {
+ public:
+ void log(nvinfer1::ILogger::Severity severity, const char* msg) override {
+ switch (severity) {
+ case Severity::kINFO:
+ LOG(INFO) << msg;
+ break;
+ case Severity::kWARNING:
+ LOG(WARNING) << msg;
+ break;
+ case Severity::kINTERNAL_ERROR:
+ case Severity::kERROR:
+ LOG(ERROR) << msg;
+ break;
+ default:
+ break;
+ }
+ }
+
+ static nvinfer1::ILogger& Global() {
+ static nvinfer1::ILogger* x = new NaiveLogger;
+ return *x;
+ }
+
+ virtual ~NaiveLogger() override {}
+};
+
+} // namespace tensorrt
+} // namespace inference
+} // namespace paddle
diff --git a/paddle/fluid/inference/tensorrt/test_engine.cc b/paddle/fluid/inference/tensorrt/test_engine.cc
new file mode 100644
index 0000000000000000000000000000000000000000..f3dbdf11f217ca35bc541b9bdb95bf0c06377fd3
--- /dev/null
+++ b/paddle/fluid/inference/tensorrt/test_engine.cc
@@ -0,0 +1,83 @@
+/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License. */
+
+#include "paddle/fluid/inference/tensorrt/engine.h"
+
+#include <cuda.h>
+#include <cuda_runtime_api.h>
+#include <glog/logging.h>
+#include <gtest/gtest.h>
+
+#include "paddle/fluid/platform/enforce.h"
+
+namespace paddle {
+namespace inference {
+namespace tensorrt {
+
+class TensorRTEngineTest : public ::testing::Test {
+ protected:
+ void SetUp() override {
+ ASSERT_EQ(0, cudaStreamCreate(&stream_));
+ engine_ = new TensorRTEngine(1, 1 << 10, &stream_);
+ engine_->InitNetwork();
+ }
+
+ void TearDown() override {
+ delete engine_;
+ cudaStreamDestroy(stream_);
+ }
+
+ protected:
+ TensorRTEngine* engine_;
+ cudaStream_t stream_;
+};
+
+TEST_F(TensorRTEngineTest, add_layer) {
+ const int size = 1;
+
+ float raw_weight[size] = {2.}; // Weight in CPU memory.
+ float raw_bias[size] = {3.};
+
+ LOG(INFO) << "create weights";
+ TensorRTEngine::Weight weight(nvinfer1::DataType::kFLOAT, raw_weight, size);
+ TensorRTEngine::Weight bias(nvinfer1::DataType::kFLOAT, raw_bias, size);
+ auto* x = engine_->DeclareInput("x", nvinfer1::DataType::kFLOAT,
+ nvinfer1::DimsCHW{1, 1, 1});
+ auto* fc_layer = TRT_ENGINE_ADD_LAYER(engine_, FullyConnected, *x, size,
+ weight.get(), bias.get());
+ PADDLE_ENFORCE(fc_layer != nullptr);
+
+ engine_->DeclareOutput(fc_layer, 0, "y");
+ LOG(INFO) << "freeze network";
+ engine_->FreezeNetwork();
+ ASSERT_EQ(engine_->engine()->getNbBindings(), 2);
+
+ // fill in real data
+ float x_v = 1234;
+ engine_->SetInputFromCPU("x", (void*)&x_v, 1 * sizeof(float));
+ LOG(INFO) << "to execute";
+ engine_->Execute(1);
+
+ LOG(INFO) << "to get output";
+ // void* y_v =
+ float y_cpu;
+ engine_->GetOutputInCPU("y", &y_cpu, sizeof(float));
+
+ LOG(INFO) << "to checkout output";
+ ASSERT_EQ(y_cpu, x_v * 2 + 3);
+}
+
+} // namespace tensorrt
+} // namespace inference
+} // namespace paddle
diff --git a/paddle/fluid/inference/tensorrt/test_tensorrt.cc b/paddle/fluid/inference/tensorrt/test_tensorrt.cc
index a81a708e7a79225fd52c4b8e081afdcd8fe7e9ad..aed5b5e1a22cbed1256d4f28d0a8a4c29c6cc744 100644
--- a/paddle/fluid/inference/tensorrt/test_tensorrt.cc
+++ b/paddle/fluid/inference/tensorrt/test_tensorrt.cc
@@ -1,16 +1,16 @@
/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
+http://www.apache.org/licenses/LICENSE-2.0
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License. */
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License. */
#include
#include
diff --git a/paddle/fluid/inference/tests/test_helper.h b/paddle/fluid/inference/tests/test_helper.h
index 117472599f7c4874ab05e29c6ecb46fd61d0db9c..af2a7a5620487a10c1df6152fc4e4bf67b150752 100644
--- a/paddle/fluid/inference/tests/test_helper.h
+++ b/paddle/fluid/inference/tests/test_helper.h
@@ -178,10 +178,10 @@ void TestInference(const std::string& dirname,
std::unique_ptr ctx;
if (PrepareContext) {
ctx = executor.Prepare(*inference_program, 0);
- executor.RunPreparedContext(ctx.get(), scope, feed_targets, fetch_targets,
- CreateVars);
+ executor.RunPreparedContext(ctx.get(), scope, &feed_targets,
+ &fetch_targets, CreateVars);
} else {
- executor.Run(*inference_program, scope, feed_targets, fetch_targets,
+ executor.Run(*inference_program, scope, &feed_targets, &fetch_targets,
CreateVars);
}
@@ -197,10 +197,10 @@ void TestInference(const std::string& dirname,
if (PrepareContext) {
// Note: if you change the inference_program, you need to call
// executor.Prepare() again to get a new ExecutorPrepareContext.
- executor.RunPreparedContext(ctx.get(), scope, feed_targets,
- fetch_targets, CreateVars);
+ executor.RunPreparedContext(ctx.get(), scope, &feed_targets,
+ &fetch_targets, CreateVars);
} else {
- executor.Run(*inference_program, scope, feed_targets, fetch_targets,
+ executor.Run(*inference_program, scope, &feed_targets, &fetch_targets,
CreateVars);
}
}
diff --git a/paddle/fluid/operators/beam_search_decode_op.h b/paddle/fluid/operators/beam_search_decode_op.h
index 4cb0457d9285e20d4b6a2f9987b7fdb1c6ac157f..3c01f81c83555b985bb6b7a9e3330ab594a62863 100644
--- a/paddle/fluid/operators/beam_search_decode_op.h
+++ b/paddle/fluid/operators/beam_search_decode_op.h
@@ -223,8 +223,9 @@ void BeamSearchDecoder::ConvertSentenceVectorToLodTensor(
sentence_vector_list[src_idx].size());
}
- auto cpu_place = new paddle::platform::CPUPlace();
- paddle::platform::CPUDeviceContext cpu_ctx(*cpu_place);
+ auto cpu_place = std::unique_ptr<paddle::platform::CPUPlace>(
+ new paddle::platform::CPUPlace());
+ paddle::platform::CPUDeviceContext cpu_ctx(*cpu_place.get());
framework::LoD lod;
lod.push_back(source_level_lod);
diff --git a/paddle/fluid/operators/bilinear_interp_op.cc b/paddle/fluid/operators/bilinear_interp_op.cc
new file mode 100644
index 0000000000000000000000000000000000000000..69f79bf93be8ac7df9cab43b84cf755f2f3dfeaa
--- /dev/null
+++ b/paddle/fluid/operators/bilinear_interp_op.cc
@@ -0,0 +1,94 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#include "paddle/fluid/operators/bilinear_interp_op.h"
+#include <vector>
+#include "paddle/fluid/framework/op_registry.h"
+
+namespace paddle {
+namespace operators {
+
+using framework::Tensor;
+
+class BilinearInterpOp : public framework::OperatorWithKernel {
+ public:
+ using framework::OperatorWithKernel::OperatorWithKernel;
+
+ protected:
+ void InferShape(framework::InferShapeContext* ctx) const override {
+    PADDLE_ENFORCE(ctx->HasInput("X"),
+                   "Input(X) of BilinearInterpOp should not be null.");
+    PADDLE_ENFORCE(ctx->HasOutput("Out"),
+                   "Output(Out) of BilinearInterpOp should not be null.");
+
+ auto dim_x = ctx->GetInputDim("X"); // NCHW format
+    int out_h = ctx->Attrs().Get<int>("out_h");
+    int out_w = ctx->Attrs().Get<int>("out_w");
+    PADDLE_ENFORCE_EQ(dim_x.size(), 4, "X's dimension must be 4");
+
+    std::vector<int64_t> dim_out({dim_x[0], dim_x[1], out_h, out_w});
+ ctx->SetOutputDim("Out", framework::make_ddim(dim_out));
+ }
+};
+
+class BilinearInterpOpMaker : public framework::OpProtoAndCheckerMaker {
+ public:
+ BilinearInterpOpMaker(OpProto* proto, OpAttrChecker* op_checker)
+ : OpProtoAndCheckerMaker(proto, op_checker) {
+    AddInput("X",
+             "(Tensor) The input tensor of the bilinear interpolation operator. "
+             "This is a 4-D tensor with shape (N x C x H x W).");
+    AddOutput("Out",
+              "(Tensor) The output tensor with shape (N x C x out_h x out_w).");
+
+    AddAttr<int>("out_h", "(int) output height of bilinear interpolation op.");
+    AddAttr<int>("out_w", "(int) output width of bilinear interpolation op.");
+ AddComment(R"DOC(
+ Bilinear interpolation is an extension of linear interpolation for
+ interpolating functions of two variables (e.g. H-direction and
+ W-direction in this op) on a rectilinear 2D grid.
+
+ The key idea is to perform linear interpolation first in one
+ direction, and then again in the other direction.
+
+ For details, please refer to Wikipedia:
+ https://en.wikipedia.org/wiki/Bilinear_interpolation
+ )DOC");
+ }
+};
+
+class BilinearInterpOpGrad : public framework::OperatorWithKernel {
+ public:
+ using framework::OperatorWithKernel::OperatorWithKernel;
+
+ protected:
+ void InferShape(framework::InferShapeContext* ctx) const override {
+ PADDLE_ENFORCE(ctx->HasInput("X"), "Input(X) should not be null");
+ PADDLE_ENFORCE(ctx->HasInput(framework::GradVarName("Out")),
+ "Input(Out@GRAD) should not be null");
+ auto dim_x = ctx->GetInputDim("X");
+ if (ctx->HasOutput(framework::GradVarName("X"))) {
+ ctx->SetOutputDim(framework::GradVarName("X"), dim_x);
+ }
+ }
+};
+
+} // namespace operators
+} // namespace paddle
+
+namespace ops = paddle::operators;
+REGISTER_OPERATOR(bilinear_interp, ops::BilinearInterpOp,
+ ops::BilinearInterpOpMaker,
+                  paddle::framework::DefaultGradOpDescMaker<true>);
+REGISTER_OPERATOR(bilinear_interp_grad, ops::BilinearInterpOpGrad);
+REGISTER_OP_CPU_KERNEL(bilinear_interp, ops::BilinearInterpKernel<float>);
+REGISTER_OP_CPU_KERNEL(bilinear_interp_grad,
+                       ops::BilinearInterpGradKernel<float>);
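
The DOC comment above describes the scheme only in words. As an editorial aid (not part of the patch), here is a minimal NumPy sketch of the same computation the CPU and GPU kernels in this patch perform: map each output pixel back to the input grid with `ratio = (in - 1) / (out - 1)`, then blend the four neighbouring input pixels with per-axis lambda weights. The function name `bilinear_resize_nchw` is ours, purely for illustration.

```python
import numpy as np

def bilinear_resize_nchw(x, out_h, out_w):
    """Bilinear resize of an NCHW array, mirroring the kernel logic above."""
    n, c, in_h, in_w = x.shape
    ratio_h = (in_h - 1.0) / (out_h - 1.0) if out_h > 1 else 0.0
    ratio_w = (in_w - 1.0) / (out_w - 1.0) if out_w > 1 else 0.0
    out = np.zeros((n, c, out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        h = int(ratio_h * i)            # top source row
        hid = 1 if h < in_h - 1 else 0  # step to the bottom row, clamped at the border
        h1 = ratio_h * i - h            # weight of the bottom row
        for j in range(out_w):
            w = int(ratio_w * j)            # left source column
            wid = 1 if w < in_w - 1 else 0  # step to the right column, clamped
            w1 = ratio_w * j - w            # weight of the right column
            out[:, :, i, j] = (
                (1 - h1) * ((1 - w1) * x[:, :, h, w] + w1 * x[:, :, h, w + wid]) +
                h1 * ((1 - w1) * x[:, :, h + hid, w] + w1 * x[:, :, h + hid, w + wid]))
    return out

# Example: upsample a 2x2 image to 3x3; the centre output pixel lands on the
# mean of the four inputs, a quick sanity check of the weights.
img = np.array([[[[0.0, 1.0], [2.0, 3.0]]]])
print(bilinear_resize_nchw(img, 3, 3)[0, 0])
```
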
diff --git a/paddle/fluid/operators/bilinear_interp_op.cu b/paddle/fluid/operators/bilinear_interp_op.cu
new file mode 100644
index 0000000000000000000000000000000000000000..82eb9e83bd84e6ec6881facbb2fac0aebce93d55
--- /dev/null
+++ b/paddle/fluid/operators/bilinear_interp_op.cu
@@ -0,0 +1,186 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#include "paddle/fluid/operators/bilinear_interp_op.h"
+#include "paddle/fluid/platform/cuda_helper.h"
+
+namespace paddle {
+namespace operators {
+
+using framework::Tensor;
+
+template <typename T>
+__global__ void KeBilinearInterpFw(
+ const T* in, const size_t in_img_h, const size_t in_img_w,
+ const size_t input_h, const size_t input_w, T* out, const size_t out_img_h,
+ const size_t out_img_w, const size_t output_h, const size_t output_w,
+ const size_t num_channels, const T ratio_h, const T ratioW) {
+ int nthreads = output_h * output_w;
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
+ if (tid < nthreads) {
+ int out_id_h = tid / output_w;
+ int out_id_w = tid % output_w;
+ int in_img_size = input_w / num_channels;
+ int out_img_size = output_w / num_channels;
+ int channel_id = out_id_w / out_img_size;
+
+ int out_img_idy = (out_id_w % out_img_size) / out_img_w;
+ int in_img_idy = ratio_h * out_img_idy;
+ int h_id = (in_img_idy < in_img_h - 1) ? 1 : 0;
+ T h1lambda = ratio_h * out_img_idy - in_img_idy;
+ T h2lambda = 1.f - h1lambda;
+
+ int out_img_idx = tid % out_img_w;
+ int in_img_idx = ratioW * out_img_idx;
+ int w_id = (in_img_idx < in_img_w - 1) ? 1 : 0;
+ T w1lambda = ratioW * out_img_idx - in_img_idx;
+ T w2lambda = 1.f - w1lambda;
+
+ const T* in_pos = &in[out_id_h * input_w + channel_id * in_img_size +
+ in_img_idy * in_img_w + in_img_idx];
+
+ // bilinear interpolation
+ out[out_id_h * output_w + out_id_w] =
+ h2lambda * (w2lambda * in_pos[0] + w1lambda * in_pos[w_id]) +
+ h1lambda * (w2lambda * in_pos[h_id * in_img_w] +
+ w1lambda * in_pos[h_id * in_img_w + w_id]);
+ }
+}
+
+template <typename T>
+__global__ void KeBilinearInterpBw(
+ T* in, const size_t in_img_h, const size_t in_img_w, const size_t input_h,
+ const size_t input_w, const T* out, const size_t out_img_h,
+ const size_t out_img_w, const size_t output_h, const size_t output_w,
+ const size_t num_channels, const T ratio_h, const T ratioW) {
+ int nthreads = output_h * output_w;
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
+ if (tid < nthreads) {
+ int out_id_h = tid / output_w;
+ int out_id_w = tid % output_w;
+ int in_img_size = input_w / num_channels;
+ int out_img_size = output_w / num_channels;
+ int channel_id = out_id_w / out_img_size;
+
+ int out_img_idy = (out_id_w % out_img_size) / out_img_w;
+ int in_img_idy = ratio_h * out_img_idy;
+ int h_id = (in_img_idy < in_img_h - 1) ? 1 : 0;
+ T h1lambda = ratio_h * out_img_idy - in_img_idy;
+ T h2lambda = 1.f - h1lambda;
+
+ int out_img_idx = tid % out_img_w;
+ int in_img_idx = ratioW * out_img_idx;
+ int w_id = (in_img_idx < in_img_w - 1) ? 1 : 0;
+ T w1lambda = ratioW * out_img_idx - in_img_idx;
+ T w2lambda = 1.f - w1lambda;
+
+ T* in_pos = &in[out_id_h * input_w + channel_id * in_img_size +
+ in_img_idy * in_img_w + in_img_idx];
+ const T* out_pos = &out[out_id_h * output_w + out_id_w];
+ atomicAdd(&in_pos[0], h2lambda * w2lambda * out_pos[0]);
+ atomicAdd(&in_pos[w_id], h2lambda * w1lambda * out_pos[0]);
+ atomicAdd(&in_pos[h_id * in_img_w], h1lambda * w2lambda * out_pos[0]);
+ atomicAdd(&in_pos[h_id * in_img_w + w_id],
+ h1lambda * w1lambda * out_pos[0]);
+ }
+}
+
+template <typename T>
+class BilinearInterpOpCUDAKernel : public framework::OpKernel<T> {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const override {
+ PADDLE_ENFORCE(platform::is_gpu_place(ctx.GetPlace()),
+ "This kernel only runs on GPU device.");
+    auto* input_t = ctx.Input<Tensor>("X");      // float tensor
+    auto* output_t = ctx.Output<Tensor>("Out");  // float tensor
+    auto* input = input_t->data<T>();
+    auto* output = output_t->mutable_data<T>(ctx.GetPlace());
+
+    int out_h = ctx.Attr<int>("out_h");
+    int out_w = ctx.Attr<int>("out_w");
+ int batch_size = input_t->dims()[0];
+ int channels = input_t->dims()[1];
+ int in_h = input_t->dims()[2];
+ int in_w = input_t->dims()[3];
+
+ int in_hw = in_h * in_w;
+ int out_hw = out_h * out_w;
+ int in_chw = channels * in_hw;
+ int out_chw = channels * out_hw;
+
+    T ratio_h = (out_h > 1) ? static_cast<T>(in_h - 1) / (out_h - 1) : 0.f;
+    T ratio_w = (out_w > 1) ? static_cast<T>(in_w - 1) / (out_w - 1) : 0.f;
+
+ if (in_h == out_h && in_w == out_w) {
+ memcpy(output, input, input_t->numel() * sizeof(T));
+ } else {
+ int threadNum = batch_size * out_chw;
+ int blocks = (threadNum + 1024 - 1) / 1024;
+
+      KeBilinearInterpFw<
+          T><<<blocks, 1024, 0, ctx.cuda_device_context().stream()>>>(
+ input, in_h, in_w, batch_size, in_chw, output, out_h, out_w,
+ batch_size, out_chw, channels, ratio_h, ratio_w);
+ }
+ }
+};
+
+template <typename T>
+class BilinearInterpGradOpCUDAKernel : public framework::OpKernel<T> {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const override {
+    auto* d_input_t = ctx.Output<Tensor>(framework::GradVarName("X"));
+    auto* d_output_t = ctx.Input<Tensor>(framework::GradVarName("Out"));
+    auto* d_input = d_input_t->mutable_data<T>(ctx.GetPlace());
+    auto* d_output = d_output_t->data<T>();
+
+    auto& device_ctx =
+        ctx.template device_context<platform::CUDADeviceContext>();
+    math::SetConstant<platform::CUDADeviceContext, T> zero;
+    zero(device_ctx, d_input_t, static_cast<T>(0.0));
+
+    int out_h = ctx.Attr<int>("out_h");
+    int out_w = ctx.Attr<int>("out_w");
+ int batch_size = d_input_t->dims()[0];
+ int channels = d_input_t->dims()[1];
+ int in_h = d_input_t->dims()[2];
+ int in_w = d_input_t->dims()[3];
+
+ int in_hw = in_h * in_w;
+ int out_hw = out_h * out_w;
+ int in_chw = channels * in_hw;
+ int out_chw = channels * out_hw;
+
+    T ratio_h = (out_h > 1) ? static_cast<T>(in_h - 1) / (out_h - 1) : 0.f;
+    T ratio_w = (out_w > 1) ? static_cast<T>(in_w - 1) / (out_w - 1) : 0.f;
+
+ if (in_h == out_h && in_w == out_w) {
+ memcpy(d_input, d_output, d_input_t->numel() * sizeof(T));
+ } else {
+ int threadNum = batch_size * out_chw;
+ int blocks = (threadNum + 1024 - 1) / 1024;
+
+      KeBilinearInterpBw<
+          T><<<blocks, 1024, 0, ctx.cuda_device_context().stream()>>>(
+ d_input, in_h, in_w, batch_size, in_chw, d_output, out_h, out_w,
+ batch_size, out_chw, channels, ratio_h, ratio_w);
+ }
+ }
+};
+
+} // namespace operators
+} // namespace paddle
+
+namespace ops = paddle::operators;
+REGISTER_OP_CUDA_KERNEL(bilinear_interp,
+                        ops::BilinearInterpOpCUDAKernel<float>);
+REGISTER_OP_CUDA_KERNEL(bilinear_interp_grad,
+                        ops::BilinearInterpGradOpCUDAKernel<float>);
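
The CUDA kernels above launch one thread per output element, with `output_h` bound to the batch size and `output_w` to `C * out_h * out_w` at the call site. The index arithmetic that recovers (batch, channel, y, x) from the flat thread id is easy to misread, so here is a small illustrative Python transcription; the helper name `decode_tid` is ours, not from the patch.

```python
# Sketch of the index decomposition used by KeBilinearInterpFw/Bw above.
# Each thread id covers one element of an NCHW output flattened to
# batch_size * (C * out_h * out_w) threads.
def decode_tid(tid, out_chw, out_hw, out_w):
    n = tid // out_chw              # out_id_h: sample index in the batch
    rem = tid % out_chw             # out_id_w: offset inside that sample's CHW block
    c = rem // out_hw               # channel_id
    y = (rem % out_hw) // out_w     # out_img_idy
    x = tid % out_w                 # out_img_idx (equals rem % out_w)
    return n, c, y, x

# For a 2-sample, 3-channel, 4x5 output, thread 67 handles element (1, 0, 1, 2):
print(decode_tid(67, 3 * 4 * 5, 4 * 5, 5))
```

The backward kernel uses the same decomposition and then scatters the gradient with `atomicAdd`, since neighbouring output pixels may share input taps.
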
diff --git a/paddle/fluid/operators/bilinear_interp_op.h b/paddle/fluid/operators/bilinear_interp_op.h
new file mode 100644
index 0000000000000000000000000000000000000000..f6cd77e4d49b53ecde6a84908cdffc7e1e02ac6a
--- /dev/null
+++ b/paddle/fluid/operators/bilinear_interp_op.h
@@ -0,0 +1,143 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. */
+
+#pragma once
+#include "paddle/fluid/framework/op_registry.h"
+#include "paddle/fluid/operators/math/math_function.h"
+
+namespace paddle {
+namespace operators {
+
+using Tensor = framework::Tensor;
+
+template <typename T>
+class BilinearInterpKernel : public framework::OpKernel<T> {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const override {
+    auto* input_t = ctx.Input<Tensor>("X");      // float tensor
+    auto* output_t = ctx.Output<Tensor>("Out");  // float tensor
+    auto* input = input_t->data<T>();
+    auto* output = output_t->mutable_data<T>(ctx.GetPlace());
+
+    int out_h = ctx.Attr<int>("out_h");
+    int out_w = ctx.Attr<int>("out_w");
+ int batch_size = input_t->dims()[0];
+ int channels = input_t->dims()[1];
+ int in_h = input_t->dims()[2];
+ int in_w = input_t->dims()[3];
+
+ int in_hw = in_h * in_w;
+ int out_hw = out_h * out_w;
+ int in_chw = channels * in_hw;
+ int out_chw = channels * out_hw;
+
+    T ratio_h = (out_h > 1) ? static_cast<T>(in_h - 1) / (out_h - 1) : 0.f;
+    T ratio_w = (out_w > 1) ? static_cast<T>(in_w - 1) / (out_w - 1) : 0.f;
+
+ if (in_h == out_h && in_w == out_w) {
+ memcpy(output, input, input_t->numel() * sizeof(T));
+ } else {
+ for (int k = 0; k < batch_size; ++k) { // loop for batches
+ for (int i = 0; i < out_h; ++i) { // loop for images
+ int h = ratio_h * i;
+ int hid = (h < in_h - 1) ? 1 : 0;
+ T h1lambda = ratio_h * i - h;
+ T h2lambda = 1 - h1lambda;
+
+ for (int j = 0; j < out_w; ++j) {
+ int w = ratio_w * j;
+ int wid = (w < in_w - 1) ? 1 : 0;
+ T w1lambda = ratio_w * j - w;
+ T w2lambda = 1 - w1lambda;
+ // calculate four position for bilinear interpolation
+ const T* in_pos = &input[k * in_chw + h * in_w + w];
+ T* out_pos = &output[k * out_chw + i * out_w + j];
+
+ for (int c = 0; c < channels; ++c) { // loop for channels
+ // bilinear interpolation
+ out_pos[0] =
+ h2lambda * (w2lambda * in_pos[0] + w1lambda * in_pos[wid]) +
+ h1lambda * (w2lambda * in_pos[hid * in_w] +
+ w1lambda * in_pos[hid * in_w + wid]);
+ in_pos += in_hw;
+ out_pos += out_hw;
+ }
+ }
+ }
+ }
+ }
+ }
+};
+
+template <typename T>
+class BilinearInterpGradKernel : public framework::OpKernel<T> {
+ public:
+ void Compute(const framework::ExecutionContext& ctx) const override {
+    auto* d_input_t = ctx.Output<Tensor>(framework::GradVarName("X"));
+    auto* d_output_t = ctx.Input<Tensor>(framework::GradVarName("Out"));
+    auto* d_input = d_input_t->mutable_data<T>(ctx.GetPlace());
+    auto* d_output = d_output_t->data<T>();
+
+    auto& device_ctx =
+        ctx.template device_context<platform::CPUDeviceContext>();
+    math::SetConstant<platform::CPUDeviceContext, T> zero;
+    zero(device_ctx, d_input_t, static_cast<T>(0.0));
+
+    int out_h = ctx.Attr<int>("out_h");
+    int out_w = ctx.Attr<int>("out_w");
+ int batch_size = d_input_t->dims()[0];
+ int channels = d_input_t->dims()[1];
+ int in_h = d_input_t->dims()[2];
+ int in_w = d_input_t->dims()[3];
+
+ int in_hw = in_h * in_w;
+ int out_hw = out_h * out_w;
+ int in_chw = channels * in_hw;
+ int out_chw = channels * out_hw;
+
+    T ratio_h = (out_h > 1) ? static_cast<T>(in_h - 1) / (out_h - 1) : 0.f;
+    T ratio_w = (out_w > 1) ? static_cast<T>(in_w - 1) / (out_w - 1) : 0.f;
+
+ if (in_h == out_h && in_w == out_w) {
+ memcpy(d_input, d_output, d_input_t->numel() * sizeof(T));
+ } else {
+ for (int k = 0; k < batch_size; ++k) { // loop for batches
+ for (int i = 0; i < out_h; ++i) { // loop for images
+ int h = ratio_h * i;
+ int hid = (h < in_h - 1) ? 1 : 0;
+ T h1lambda = ratio_h * i - h;
+ T h2lambda = 1 - h1lambda;
+
+ for (int j = 0; j < out_w; ++j) {
+ int w = ratio_w * j;
+ int wid = (w < in_w - 1) ? 1 : 0;
+ T w1lambda = ratio_w * j - w;
+ T w2lambda = 1 - w1lambda;
+ T* in_pos = &d_input[k * in_chw + h * in_w + w];
+ const T* out_pos = &d_output[k * out_chw + i * out_w + j];
+
+ for (int c = 0; c < channels; ++c) { // loop for channels
+ in_pos[0] += h2lambda * w2lambda * out_pos[0];
+ in_pos[wid] += h2lambda * w1lambda * out_pos[0];
+ in_pos[hid * in_w] += h1lambda * w2lambda * out_pos[0];
+ in_pos[hid * in_w + wid] += h1lambda * w1lambda * out_pos[0];
+ in_pos += in_hw;
+ out_pos += out_hw;
+ }
+ }
+ }
+ }
+ }
+ }
+};
+
+} // namespace operators
+} // namespace paddle
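
The CPU gradient kernel above is the exact transpose of the forward pass: each output-gradient element is scattered onto the same four input taps, weighted by the same lambda factors. Below is a small illustrative NumPy sketch of that scatter; the function name is ours and the snippet is not code from the patch.

```python
import numpy as np

# Illustrative NumPy version of the accumulation done by BilinearInterpGradKernel.
def bilinear_resize_grad_nchw(d_out, in_h, in_w):
    n, c, out_h, out_w = d_out.shape
    ratio_h = (in_h - 1.0) / (out_h - 1.0) if out_h > 1 else 0.0
    ratio_w = (in_w - 1.0) / (out_w - 1.0) if out_w > 1 else 0.0
    d_in = np.zeros((n, c, in_h, in_w), dtype=d_out.dtype)
    for i in range(out_h):
        h = int(ratio_h * i)
        hid = 1 if h < in_h - 1 else 0
        h1 = ratio_h * i - h
        for j in range(out_w):
            w = int(ratio_w * j)
            wid = 1 if w < in_w - 1 else 0
            w1 = ratio_w * j - w
            g = d_out[:, :, i, j]
            # Distribute the gradient over the four forward-pass taps.
            d_in[:, :, h, w] += (1 - h1) * (1 - w1) * g
            d_in[:, :, h, w + wid] += (1 - h1) * w1 * g
            d_in[:, :, h + hid, w] += h1 * (1 - w1) * g
            d_in[:, :, h + hid, w + wid] += h1 * w1 * g
    return d_in

# The four weights sum to 1, so the total gradient is preserved:
print(bilinear_resize_grad_nchw(np.ones((1, 1, 3, 3)), 2, 2).sum())  # 9.0
```
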
diff --git a/paddle/fluid/operators/conditional_block_op.cc b/paddle/fluid/operators/conditional_block_op.cc
index 137fee99e82e5c7fad58a36ef49adb323f13f3a4..27f74a789beef02d31ebceb9b909e97ebd68232a 100644
--- a/paddle/fluid/operators/conditional_block_op.cc
+++ b/paddle/fluid/operators/conditional_block_op.cc
@@ -227,7 +227,7 @@ class ConditionalBlockGradMaker : public framework::SingleGradOpDescMaker {
grad_op->SetOutput(framework::GradVarName("X"), InputGrad("X", false));
grad_op->SetOutput(framework::GradVarName("Params"),
InputGrad("Params", false));
- grad_op->SetBlockAttr("sub_block", *this->grad_block_[0]);
+ grad_op->SetBlockAttr("sub_block", this->grad_block_[0]);
grad_op->SetAttr("is_scalar_condition", GetAttr("is_scalar_condition"));
    return std::unique_ptr<framework::OpDesc>(grad_op);
}
diff --git a/paddle/fluid/operators/detail/grpc_client.h b/paddle/fluid/operators/detail/grpc_client.h
index 4425b19328f503eb7f9022916ed6452cdfea4eeb..f6229b71bc01a6de51f50f5fe880ada6e15e74dd 100644
--- a/paddle/fluid/operators/detail/grpc_client.h
+++ b/paddle/fluid/operators/detail/grpc_client.h
@@ -29,12 +29,12 @@ limitations under the License. */
#include "grpc++/support/byte_buffer.h"
#include "grpc++/support/slice.h"
#include "grpc/support/log.h"
+#include "paddle/fluid/framework/blocking_queue.h"
#include "paddle/fluid/framework/data_type.h"
#include "paddle/fluid/framework/lod_tensor.h"
#include "paddle/fluid/framework/scope.h"
#include "paddle/fluid/framework/selected_rows.h"
#include "paddle/fluid/operators/detail/sendrecvop_utils.h"
-#include "paddle/fluid/operators/detail/simple_block_queue.h"
namespace paddle {
namespace operators {
diff --git a/paddle/fluid/operators/detail/grpc_server.cc b/paddle/fluid/operators/detail/grpc_server.cc
index 119e146e078e476b2768a8495ea63e468f952fd2..8cee46cbb2d6a1002864916e250fb7ab30f91430 100644
--- a/paddle/fluid/operators/detail/grpc_server.cc
+++ b/paddle/fluid/operators/detail/grpc_server.cc
@@ -90,7 +90,7 @@ class RequestGet final : public RequestBase {
::grpc::ServerCompletionQueue* cq,
framework::Scope* scope,
const platform::DeviceContext* dev_ctx,
-                    SimpleBlockQueue<MessageWithName>* queue)
+                    framework::BlockingQueue<MessageWithName>* queue)
: RequestBase(service, cq, dev_ctx),
responder_(&ctx_),
scope_(scope),
@@ -128,7 +128,7 @@ class RequestGet final : public RequestBase {
sendrecv::VariableMessage request_;
ServerAsyncResponseWriter<::grpc::ByteBuffer> responder_;
framework::Scope* scope_;
-  SimpleBlockQueue<MessageWithName>* queue_;
+  framework::BlockingQueue<MessageWithName>* queue_;
};
class RequestPrefetch final : public RequestBase {
diff --git a/paddle/fluid/operators/detail/grpc_server.h b/paddle/fluid/operators/detail/grpc_server.h
index 452ff5e967c086340e065a1b6a4b8672c75a4a3d..a15c93b7830265a2bb22334b5bb5a0f8ee2f28f4 100644
--- a/paddle/fluid/operators/detail/grpc_server.h
+++ b/paddle/fluid/operators/detail/grpc_server.h
@@ -19,6 +19,7 @@ limitations under the License. */
#include
#include "grpc++/grpc++.h"
+#include "paddle/fluid/framework/blocking_queue.h"
#include "paddle/fluid/framework/executor.h"
#include "paddle/fluid/framework/lod_tensor.h"
#include "paddle/fluid/framework/program_desc.h"
@@ -29,7 +30,6 @@ limitations under the License. */
#include "paddle/fluid/operators/detail/send_recv.grpc.pb.h"
#include "paddle/fluid/operators/detail/send_recv.pb.h"
#include "paddle/fluid/operators/detail/sendrecvop_utils.h"
-#include "paddle/fluid/operators/detail/simple_block_queue.h"
namespace paddle {
namespace operators {
@@ -37,7 +37,7 @@ namespace detail {
typedef std::pair<std::string, std::shared_ptr<VariableResponse>>
    ReceivedMessage;
-typedef SimpleBlockQueue<ReceivedMessage> ReceivedQueue;
+typedef framework::BlockingQueue<ReceivedMessage> ReceivedQueue;
typedef std::pair<std::string, sendrecv::VariableMessage> MessageWithName;
class RequestBase;
@@ -99,7 +99,7 @@ class AsyncGRPCServer final {
const platform::DeviceContext *dev_ctx_;
// received variable from RPC, operators fetch variable from this queue.
-  SimpleBlockQueue<MessageWithName> var_get_queue_;
+  framework::BlockingQueue<MessageWithName> var_get_queue_;
// client send variable to this queue.
ReceivedQueue var_recv_queue_;
diff --git a/paddle/fluid/operators/detail/sendrecvop_utils.cc b/paddle/fluid/operators/detail/sendrecvop_utils.cc
index 69fcffe9bc34006aef2e5a39227cf6d947e4615f..766bcf1ac5e06628638fcc8a305c00ab2795bbf2 100644
--- a/paddle/fluid/operators/detail/sendrecvop_utils.cc
+++ b/paddle/fluid/operators/detail/sendrecvop_utils.cc
@@ -39,7 +39,9 @@ void SerializeToByteBuffer(const std::string& name, framework::Variable* var,
// parallelism execution, need to know when to free the tensor.
DestroyCallback destroy_callback = [](void* backing) {};
- void* buf = malloc(1024);
+  auto buffer = std::unique_ptr<char[]>(new char[1024]);
+ void* buf = buffer.get();
+
void* payload = nullptr;
size_t payload_size;
  ProtoEncodeHelper e(static_cast<char*>(buf), 1024);
diff --git a/paddle/fluid/operators/detail/simple_block_queue.h b/paddle/fluid/operators/detail/simple_block_queue.h
deleted file mode 100644
index 69773e05df7ed76f31c26f4304693fec2e9aac9c..0000000000000000000000000000000000000000
--- a/paddle/fluid/operators/detail/simple_block_queue.h
+++ /dev/null
@@ -1,52 +0,0 @@
-/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License. */
-
-#pragma once
-
-#include <condition_variable>  // NOLINT
-#include <deque>
-#include <mutex>  // NOLINT
-
-namespace paddle {
-namespace operators {
-namespace detail {
-
-template <typename T>
-class SimpleBlockQueue {
- private:
- std::mutex mutex_;
- std::condition_variable condition_;
-  std::deque<T> queue_;
-
- public:
- void Push(T const& value) {
- {
-    std::unique_lock<std::mutex> lock(this->mutex_);
- queue_.push_front(value);
- }
- this->condition_.notify_one();
- }
-
- T Pop() {
-    std::unique_lock<std::mutex> lock(this->mutex_);
- this->condition_.wait(lock, [=] { return !this->queue_.empty(); });
- T rc(std::move(this->queue_.back()));
- this->queue_.pop_back();
- return rc;
- }
-};
-
-} // namespace detail
-} // namespace operators
-} // namespace paddle
diff --git a/paddle/fluid/operators/gru_op.h b/paddle/fluid/operators/gru_op.h
index 1d5c291495c0f0c0d8da9ff6949888b4cbb6036d..53f844a6607bd2e98c53b53c23422f6b48e2ced6 100644
--- a/paddle/fluid/operators/gru_op.h
+++ b/paddle/fluid/operators/gru_op.h
@@ -56,8 +56,6 @@ class GRUKernel : public framework::OpKernel {
    auto* hidden = context.Output<LoDTensor>("Hidden");
    hidden->mutable_data<T>(context.GetPlace());
- context.ShareLoD("Input", "Hidden");
-
auto hidden_dims = hidden->dims();
    bool is_reverse = context.Attr<bool>("is_reverse");
diff --git a/paddle/fluid/operators/math/math_function.cc b/paddle/fluid/operators/math/math_function.cc
index 44fd739fb1d161c6c7d6ab1cc611c59220280a4e..b5ae41c8f9d7aeb8e410b795fb9fbbd57ec69d4b 100644
--- a/paddle/fluid/operators/math/math_function.cc
+++ b/paddle/fluid/operators/math/math_function.cc
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/fluid/operators/math/math_function.h"
+#include
#include "paddle/fluid/framework/data_type.h"
#include "paddle/fluid/operators/math/math_function_impl.h"
#include "paddle/fluid/platform/float16.h"
@@ -161,7 +162,8 @@ void batched_gemm(
const platform::CPUDeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const float16 alpha, const float16* A, const float16* B, const float16 beta,
- float16* C, const int batchCount, const int strideA, const int strideB) {
+ float16* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
PADDLE_THROW("float16 batched_gemm not supported on CPU");
}
@@ -172,7 +174,8 @@ void batched_gemm(
const platform::CPUDeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const float alpha, const float* A, const float* B, const float beta,
- float* C, const int batchCount, const int strideA, const int strideB) {
+ float* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
int lda = (transA == CblasNoTrans) ? K : M;
int ldb = (transB == CblasNoTrans) ? N : K;
int ldc = N;
@@ -194,7 +197,8 @@ void batched_gemm(
const platform::CPUDeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const double alpha, const double* A, const double* B, const double beta,
- double* C, const int batchCount, const int strideA, const int strideB) {
+ double* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
int lda = (transA == CblasNoTrans) ? K : M;
int ldb = (transB == CblasNoTrans) ? N : K;
int ldc = N;
@@ -220,7 +224,8 @@ void batched_gemm(
const platform::CPUDeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const float alpha, const float* A, const float* B, const float beta,
- float* C, const int batchCount, const int strideA, const int strideB) {
+ float* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
for (int k = 0; k < batchCount; ++k) {
const float* Ak = &A[k * strideA];
const float* Bk = &B[k * strideB];
@@ -235,7 +240,8 @@ void batched_gemm(
const platform::CPUDeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const double alpha, const double* A, const double* B, const double beta,
- double* C, const int batchCount, const int strideA, const int strideB) {
+ double* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
for (int k = 0; k < batchCount; ++k) {
const double* Ak = &A[k * strideA];
const double* Bk = &B[k * strideB];
diff --git a/paddle/fluid/operators/math/math_function.cu b/paddle/fluid/operators/math/math_function.cu
index 9badf26c9bb80acad029be3d1b63377cef63d929..2aa819625e0f5213a6001908e715bcc73d4747c3 100644
--- a/paddle/fluid/operators/math/math_function.cu
+++ b/paddle/fluid/operators/math/math_function.cu
@@ -13,6 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#define EIGEN_USE_GPU
+#include
#include "paddle/fluid/framework/data_type.h"
#include "paddle/fluid/operators/math/math_function.h"
#include "paddle/fluid/operators/math/math_function_impl.h"
@@ -267,7 +268,8 @@ void batched_gemm(
const platform::CUDADeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const float16 alpha, const float16* A, const float16* B, const float16 beta,
- float16* C, const int batchCount, const int strideA, const int strideB) {
+ float16* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
#if CUDA_VERSION >= 8000
// Note that cublas follows fortran order, so the order is different from
// the cblas convention.
@@ -278,7 +280,7 @@ void batched_gemm(
(transA == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;
cublasOperation_t cuTransB =
(transB == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;
- const int strideC = M * N;
+ const int64_t strideC = M * N;
const half h_alpha = static_cast(alpha);
const half h_beta = static_cast(beta);
@@ -303,7 +305,8 @@ void batched_gemm(
const platform::CUDADeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const float alpha, const float* A, const float* B, const float beta,
- float* C, const int batchCount, const int strideA, const int strideB) {
+ float* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
#if CUDA_VERSION >= 8000
// Note that cublas follows fortran order, so the order is different from
// the cblas convention.
@@ -314,7 +317,7 @@ void batched_gemm(
(transA == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;
cublasOperation_t cuTransB =
(transB == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;
- const int strideC = M * N;
+ const int64_t strideC = M * N;
PADDLE_ENFORCE(platform::dynload::cublasSgemmStridedBatched(
context.cublas_handle(), cuTransB, cuTransA, N, M, K, &alpha, B, ldb,
@@ -329,7 +332,8 @@ void batched_gemm(
const platform::CUDADeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N, const int K,
const double alpha, const double* A, const double* B, const double beta,
- double* C, const int batchCount, const int strideA, const int strideB) {
+ double* C, const int batchCount, const int64_t strideA,
+ const int64_t strideB) {
#if CUDA_VERSION >= 8000
// Note that cublas follows fortran order, so the order is different from
// the cblas convention.
@@ -340,7 +344,7 @@ void batched_gemm(
(transA == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;
cublasOperation_t cuTransB =
(transB == CblasNoTrans) ? CUBLAS_OP_N : CUBLAS_OP_T;
- const int strideC = M * N;
+ const int64_t strideC = M * N;
PADDLE_ENFORCE(platform::dynload::cublasDgemmStridedBatched(
context.cublas_handle(), cuTransB, cuTransA, N, M, K, &alpha, B, ldb,
diff --git a/paddle/fluid/operators/math/math_function.h b/paddle/fluid/operators/math/math_function.h
index cdbc7bfb37e83c6c2b696ba010277c9eec49f2a8..cdd02974722045457aacdfa517c147751185f332 100644
--- a/paddle/fluid/operators/math/math_function.h
+++ b/paddle/fluid/operators/math/math_function.h
@@ -26,7 +26,7 @@ limitations under the License. */
#ifndef LAPACK_FOUND
extern "C" {
-#include <cblas.h>
+#include <cblas.h>  // NOLINT
int LAPACKE_sgetrf(int matrix_layout, int m, int n, float* a, int lda,
int* ipiv);
int LAPACKE_dgetrf(int matrix_layout, int m, int n, double* a, int lda,
@@ -39,6 +39,7 @@ int LAPACKE_dgetri(int matrix_layout, int n, double* a, int lda,
#endif
#include
+#include
#include "paddle/fluid/framework/eigen.h"
#include "paddle/fluid/framework/tensor.h"
@@ -78,8 +79,8 @@ template
void batched_gemm(const DeviceContext& context, const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB, const int M, const int N,
const int K, const T alpha, const T* A, const T* B,
- const T beta, T* C, const int batchCount, const int strideA,
- const int strideB);
+ const T beta, T* C, const int batchCount,
+ const int64_t strideA, const int64_t strideB);
template <typename DeviceContext, typename T>
void gemv(const DeviceContext& context, const bool trans_a, const int M,
diff --git a/paddle/fluid/operators/parallel_do_op.cc b/paddle/fluid/operators/parallel_do_op.cc
index b28c16b13fce30c6e9be9953009b53e722cf4885..ae34fe2184b43cc104c14672dec30efd3b0e9f3b 100644
--- a/paddle/fluid/operators/parallel_do_op.cc
+++ b/paddle/fluid/operators/parallel_do_op.cc
@@ -364,7 +364,7 @@ class ParallelDoGradOpDescMaker : public framework::SingleGradOpDescMaker {
}
}
grad->SetAttrMap(this->Attrs());
- grad->SetBlockAttr(kParallelBlock, *grad_block_[0]);
+ grad->SetBlockAttr(kParallelBlock, grad_block_[0]);
return std::unique_ptr(grad);
}
diff --git a/paddle/fluid/operators/recurrent_op.cc b/paddle/fluid/operators/recurrent_op.cc
index 00241e768217db0a611c00bbc72e2fb83ade73b4..72c2905872c528a7ed05820744f4031799ad9e46 100644
--- a/paddle/fluid/operators/recurrent_op.cc
+++ b/paddle/fluid/operators/recurrent_op.cc
@@ -596,7 +596,7 @@ class RecurrentGradOpDescMaker : public framework::SingleGradOpDescMaker {
}
}
grad->SetAttrMap(this->Attrs());
- grad->SetBlockAttr(kStepBlock, *grad_block_[0]);
+ grad->SetBlockAttr(kStepBlock, grad_block_[0]);
    return std::unique_ptr<framework::OpDesc>(grad);
}
diff --git a/paddle/fluid/operators/sequence_conv_op.h b/paddle/fluid/operators/sequence_conv_op.h
index b59504bb9893b720247841bdad5aa577992b7fb6..3916cdbb6a69c5a18f7a21ec60bad2732b4c3e58 100644
--- a/paddle/fluid/operators/sequence_conv_op.h
+++ b/paddle/fluid/operators/sequence_conv_op.h
@@ -33,7 +33,6 @@ class SequenceConvKernel : public framework::OpKernel {
auto filter = *context.Input("Filter");
out->mutable_data(context.GetPlace());
- context.ShareLoD("X", "Out");
int context_start = context.Attr("contextStart");
int context_length = context.Attr("contextLength");
diff --git a/paddle/fluid/operators/softmax_mkldnn_op.cc b/paddle/fluid/operators/softmax_mkldnn_op.cc
index d00bd1447e6114b6000b65799abb566a2a510127..71b541d98f6e0d3e12601c9988ca6ffb8bb7554d 100644
--- a/paddle/fluid/operators/softmax_mkldnn_op.cc
+++ b/paddle/fluid/operators/softmax_mkldnn_op.cc
@@ -77,7 +77,7 @@ class SoftmaxMKLDNNKernel : public paddle::framework::OpKernel {
const bool is_test = ctx.Attr("is_test");
if (!is_test) {
T threshold = exp(-64);
- for (size_t i = 0; i < dst_tz[0] * dst_tz[1]; ++i) {
+ for (int i = 0; i < dst_tz[0] * dst_tz[1]; ++i) {
output_data[i] =
output_data[i] < threshold ? threshold : output_data[i];
}
diff --git a/paddle/fluid/operators/while_op.cc b/paddle/fluid/operators/while_op.cc
index 8b62b242cf8745378eb216db10605388b294ca75..710cc9fc2e716da2e4fd067562a34d312e48b1a1 100644
--- a/paddle/fluid/operators/while_op.cc
+++ b/paddle/fluid/operators/while_op.cc
@@ -288,7 +288,7 @@ class WhileGradOpDescMaker : public framework::SingleGradOpDescMaker {
while_grad->SetInput(framework::GradVarName(kOutputs), output_grads_list);
while_grad->SetAttrMap(this->Attrs());
- while_grad->SetBlockAttr(kStepBlock, *grad_block);
+ while_grad->SetBlockAttr(kStepBlock, grad_block);
// record the original output gradient names, since the gradient name of
// while operator could be renamed.
while_grad->SetAttr("original_output_grad", output_grads_list);
diff --git a/paddle/fluid/platform/CMakeLists.txt b/paddle/fluid/platform/CMakeLists.txt
index 917bdc64abf608b8ade70c47f76a8adffb32046a..598fd4d419078a973647f2f8f20e8a12c8115a8b 100644
--- a/paddle/fluid/platform/CMakeLists.txt
+++ b/paddle/fluid/platform/CMakeLists.txt
@@ -12,7 +12,7 @@ add_custom_command(TARGET profiler_py_proto POST_BUILD
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
if(WITH_GPU)
- cc_library(enforce SRCS enforce.cc DEPS)
+ nv_library(enforce SRCS enforce.cc)
else()
cc_library(enforce SRCS enforce.cc)
endif()
diff --git a/paddle/scripts/docker/README.md b/paddle/scripts/README.md
similarity index 66%
rename from paddle/scripts/docker/README.md
rename to paddle/scripts/README.md
index 78c0cc378231f763597556cc5450f6f03ab2b291..9e8b135c1bc7fc05d88fe6f3bed17dd3b48e9615 100644
--- a/paddle/scripts/docker/README.md
+++ b/paddle/scripts/README.md
@@ -13,40 +13,49 @@ We want to make the building procedures:
1. Build docker images with PaddlePaddle pre-installed, so that we can run
PaddlePaddle applications directly in docker or on Kubernetes clusters.
-To achieve this, we created a repo: https://github.com/PaddlePaddle/buildtools
-which gives several docker images that are `manylinux1` sufficient. Then we
-can build PaddlePaddle using these images to generate corresponding `whl`
-binaries.
+To achieve this, we maintain a Docker Hub repo: https://hub.docker.com/r/paddlepaddle/paddle
+which provides pre-built environment images for building PaddlePaddle and generating the corresponding `whl`
+binaries. (**We strongly recommend building PaddlePaddle in our pre-specified Docker environment.**)
-## Run The Build
+## Development Workflow
+
+Here we describe how the workflow goes on. We start from considering our daily development environment.
+
+Developers work on a computer, which is usually a laptop or desktop:
+
+
+
+or, they might rely on a more sophisticated box (like with GPUs):
+
+
+
+A principle here is that source code lies on the development computer (host) so that editors like Eclipse can parse the source code to support auto-completion.
+
+## Build With Docker
### Build Environments
-The pre-built build environment images are:
+The latest pre-built build environment images are:
| Image | Tag |
| ----- | --- |
-| paddlepaddle/paddle_manylinux_devel | cuda7.5_cudnn5 |
-| paddlepaddle/paddle_manylinux_devel | cuda8.0_cudnn5 |
-| paddlepaddle/paddle_manylinux_devel | cuda7.5_cudnn7 |
-| paddlepaddle/paddle_manylinux_devel | cuda9.0_cudnn7 |
+| paddlepaddle/paddle | latest-dev |
+| paddlepaddle/paddle | latest-dev-android |
### Start Build
-Choose one docker image that suit your environment and run the following
-command to start a build:
-
```bash
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
-docker run --rm -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TESTING=OFF" -e "RUN_TEST=OFF" -e "PYTHON_ABI=cp27-cp27mu" paddlepaddle/paddle_manylinux_devel /paddle/paddle/scripts/docker/build.sh
+./paddle/scripts/paddle_docker_build.sh build
```
After the build finishes, you can get output `whl` package under
`build/python/dist`.
-This command mounts the source directory on the host into `/paddle` in the container, then run the build script `/paddle/paddle/scripts/docker/build.sh`
-in the container. When it writes to `/paddle/build` in the container, it writes to `$PWD/build` on the host indeed.
+This command will download the most recent dev image from Docker Hub, start a container in the background and then run the build script `/paddle/paddle/scripts/paddle_build.sh build` in the container.
+The container mounts the source directory on the host into `/paddle`.
+When it writes to `/paddle/build` in the container, it actually writes to `$PWD/build` on the host.
### Build Options
@@ -68,7 +77,6 @@ Users can specify the following Docker build arguments with either "ON" or "OFF"
| `WITH_DOC` | OFF | Build docs after build binaries. |
| `WOBOQ` | OFF | Generate WOBOQ code viewer under `build/woboq_out` |
-
## Docker Images
You can get the latest PaddlePaddle docker images by
@@ -144,59 +152,37 @@ docker push
kubectl ...
```
-## Docker Images for Developers
-
-We have a special docker image for developers:
-`paddlepaddle/paddle:-dev`. This image is also generated from
-https://github.com/PaddlePaddle/buildtools
-
-This a development image contains only the
-development tools and standardizes the building procedure. Users include:
-
-- developers -- no longer need to install development tools on the host, and can build their current work on the host (development computer).
-- release engineers -- use this to build the official release from certain branch/tag on Github.com.
-- document writers / Website developers -- Our documents are in the source repo in the form of .md/.rst files and comments in source code. We need tools to extract the information, typeset, and generate Web pages.
-
-Of course, developers can install building tools on their development computers. But different versions of PaddlePaddle might require different set or version of building tools. Also, it makes collaborative debugging easier if all developers use a unified development environment.
-
-The development image contains the following tools:
-
- - gcc/clang
- - nvcc
- - Python
- - sphinx
- - woboq
- - sshd
-
-Many developers work on a remote computer with GPU; they could ssh into the computer and `docker exec` into the development container. However, running `sshd` in the container allows developers to ssh into the container directly.
-
-
-### Development Workflow
-
-Here we describe how the workflow goes on. We start from considering our daily development environment.
+### Reading source code with woboq codebrowser
-Developers work on a computer, which is usually a laptop or desktop:
+If you are interested in the C++ source code, you can build it into HTML pages using [Woboq codebrowser](https://github.com/woboq/woboq_codebrowser).
-
+- The following command builds PaddlePaddle, generates HTML pages from C++ source code, and writes HTML pages into `$HOME/woboq_out` on the host:
-or, they might rely on a more sophisticated box (like with GPUs):
+```bash
+./paddle/scripts/paddle_docker_build.sh html
+```
-
+- You can open the generated HTML files in your Web browser. Or, if you want to run an Nginx container to serve them for a wider audience, you can run:
-A principle here is that source code lies on the development computer (host) so that editors like Eclipse can parse the source code to support auto-completion.
+```
+docker run -v $HOME/woboq_out:/usr/share/nginx/html -d -p 8080:80 nginx
+```
-### Reading source code with woboq codebrowser
+## More Options
-For developers who are interested in the C++ source code, please use -e "WOBOQ=ON" to enable the building of C++ source code into HTML pages using [Woboq codebrowser](https://github.com/woboq/woboq_codebrowser).
+### Build Without Docker
-- The following command builds PaddlePaddle, generates HTML pages from C++ source code, and writes HTML pages into `$HOME/woboq_out` on the host:
+Follow the *Dockerfile* in the PaddlePaddle repo to set up your local dev environment, then run:
```bash
-docker run -v $PWD:/paddle -v $HOME/woboq_out:/woboq_out -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "WITH_TESTING=ON" -e "WOBOQ=ON" paddlepaddle/paddle:latest-dev
+./paddle/scripts/paddle_build.sh build
```
-- You can open the generated HTML files in your Web browser. Or, if you want to run a Nginx container to serve them for a wider audience, you can run:
+### Additional Tasks
-```
-docker run -v $HOME/woboq_out:/usr/share/nginx/html -d -p 8080:80 nginx
+You can get the help menu for the build scripts by running with no options:
+
+```bash
+./paddle/scripts/paddle_build.sh
+or ./paddle/scripts/paddle_docker_build.sh
```
diff --git a/paddle/scripts/docker/doc/paddle-development-environment-gpu.graffle b/paddle/scripts/doc/paddle-development-environment-gpu.graffle
similarity index 100%
rename from paddle/scripts/docker/doc/paddle-development-environment-gpu.graffle
rename to paddle/scripts/doc/paddle-development-environment-gpu.graffle
diff --git a/paddle/scripts/docker/doc/paddle-development-environment-gpu.png b/paddle/scripts/doc/paddle-development-environment-gpu.png
similarity index 100%
rename from paddle/scripts/docker/doc/paddle-development-environment-gpu.png
rename to paddle/scripts/doc/paddle-development-environment-gpu.png
diff --git a/paddle/scripts/docker/doc/paddle-development-environment.graffle b/paddle/scripts/doc/paddle-development-environment.graffle
similarity index 100%
rename from paddle/scripts/docker/doc/paddle-development-environment.graffle
rename to paddle/scripts/doc/paddle-development-environment.graffle
diff --git a/paddle/scripts/docker/doc/paddle-development-environment.png b/paddle/scripts/doc/paddle-development-environment.png
similarity index 100%
rename from paddle/scripts/docker/doc/paddle-development-environment.png
rename to paddle/scripts/doc/paddle-development-environment.png
diff --git a/paddle/scripts/paddle_build.sh b/paddle/scripts/paddle_build.sh
new file mode 100755
index 0000000000000000000000000000000000000000..654c8272a18e5adb01e75be94985a80502ba2c8d
--- /dev/null
+++ b/paddle/scripts/paddle_build.sh
@@ -0,0 +1,508 @@
+#!/usr/bin/env bash
+
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#=================================================
+# Utils
+#=================================================
+
+function print_usage() {
+ RED='\033[0;31m'
+ BLUE='\033[0;34m'
+ BOLD='\033[1m'
+ NONE='\033[0m'
+
+ echo -e "\n${RED}Usage${NONE}:
+ ${BOLD}$0${NONE} [OPTION]"
+
+ echo -e "\n${RED}Options${NONE}:
+ ${BLUE}build${NONE}: run build for x86 platform
+ ${BLUE}build_android${NONE}: run build for android platform
+ ${BLUE}build_ios${NONE}: run build for ios platform
+ ${BLUE}test${NONE}: run all unit tests
+ ${BLUE}bind_test${NONE}: parallel tests bind to different GPU
+ ${BLUE}doc${NONE}: generate paddle documents
+ ${BLUE}html${NONE}: convert C++ source code into HTML
+ ${BLUE}dockerfile${NONE}: generate paddle release dockerfile
+ ${BLUE}capi${NONE}: generate paddle CAPI package
+ ${BLUE}fluid_inference_lib${NONE}: deploy fluid inference library
+ ${BLUE}check_style${NONE}: run code style check
+ "
+}
+
+function init() {
+ PADDLE_ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}")/../../" && pwd )"
+}
+
+function cmake_gen() {
+ mkdir -p ${PADDLE_ROOT}/build
+ cd ${PADDLE_ROOT}/build
+
+ # build script will not fail if *.deb does not exist
+ rm *.deb 2>/dev/null || true
+ # delete previous built whl packages
+ rm -rf python/dist 2>/dev/null || true
+
+ # Support build for all python versions, currently
+ # including cp27-cp27m and cp27-cp27mu.
+ PYTHON_FLAGS=""
+ if [ "$1" != "" ]; then
+ echo "using python abi: $1"
+ if [ "$1" == "cp27-cp27m" ]; then
+ export LD_LIBRARY_PATH=/opt/_internal/cpython-2.7.11-ucs2/lib:${LD_LIBRARY_PATH#/opt/_internal/cpython-2.7.11-ucs4/lib:}
+ export PATH=/opt/python/cp27-cp27m/bin/:${PATH}
+ PYTHON_FLAGS="-DPYTHON_EXECUTABLE:FILEPATH=/opt/python/cp27-cp27m/bin/python
+ -DPYTHON_INCLUDE_DIR:PATH=/opt/python/cp27-cp27m/include/python2.7
+ -DPYTHON_LIBRARIES:FILEPATH=/opt/_internal/cpython-2.7.11-ucs2/lib/libpython2.7.so"
+ elif [ "$1" == "cp27-cp27mu" ]; then
+ export LD_LIBRARY_PATH=/opt/_internal/cpython-2.7.11-ucs4/lib:${LD_LIBRARY_PATH#/opt/_internal/cpython-2.7.11-ucs2/lib:}
+ export PATH=/opt/python/cp27-cp27mu/bin/:${PATH}
+ PYTHON_FLAGS="-DPYTHON_EXECUTABLE:FILEPATH=/opt/python/cp27-cp27mu/bin/python
+ -DPYTHON_INCLUDE_DIR:PATH=/opt/python/cp27-cp27mu/include/python2.7
+ -DPYTHON_LIBRARIES:FILEPATH=/opt/_internal/cpython-2.7.11-ucs4/lib/libpython2.7.so"
+ fi
+ fi
+
+ cat <&2
+ echo "Please use pre-commit to check what is wrong." 1>&2
+ exit 1
+}
+
+function check_style() {
+ trap 'abort' 0
+ set -e
+
+ # install glide
+ curl https://glide.sh/get | bash
+ eval "$(GIMME_GO_VERSION=1.8.3 gimme)"
+
+ # set up go environment for running gometalinter
+ mkdir -p $GOPATH/src/github.com/PaddlePaddle/
+ ln -sf ${PADDLE_ROOT} $GOPATH/src/github.com/PaddlePaddle/Paddle
+ cd $GOPATH/src/github.com/PaddlePaddle/Paddle/go; glide install; cd -
+
+ go get github.com/alecthomas/gometalinter
+ gometalinter --install
+
+ cd ${PADDLE_ROOT}
+ export PATH=/usr/bin:$PATH
+ pre-commit install
+ clang-format --version
+
+ if ! pre-commit run -a ; then
+ git diff
+ exit 1
+ fi
+
+ trap : 0
+}
+
+#=================================================
+# Build
+#=================================================
+
+function build() {
+ mkdir -p ${PADDLE_ROOT}/build
+ cd ${PADDLE_ROOT}/build
+ cat <= 21."
+ ANDROID_API=21
+ fi
+ else # armeabi, armeabi-v7a
+ ANDROID_ARCH=arm
+ fi
+
+ ANDROID_STANDALONE_TOOLCHAIN=$ANDROID_TOOLCHAINS_DIR/$ANDROID_ARCH-android-$ANDROID_API
+
+ cat < ${PADDLE_ROOT}/build/Dockerfile <
+ ENV HOME /root
+EOF
+
+ if [[ ${WITH_GPU} == "ON" ]]; then
+ NCCL_DEPS="apt-get install -y libnccl2=2.1.2-1+cuda8.0 libnccl-dev=2.1.2-1+cuda8.0 &&"
+ else
+ NCCL_DEPS=""
+ fi
+
+ if [[ ${WITH_FLUID_ONLY:-OFF} == "OFF" ]]; then
+ PADDLE_VERSION="paddle version"
+ CMD='"paddle", "version"'
+ else
+ PADDLE_VERSION="true"
+ CMD='"true"'
+ fi
+
+ cat >> /paddle/build/Dockerfile < /dev/null
+ return $?
+}
+
+function start_build_docker() {
+ docker pull $IMG
+
+ if container_running "${CONTAINER_ID}"; then
+ docker stop "${CONTAINER_ID}" 1>/dev/null
+ docker rm -f "${CONTAINER_ID}" 1>/dev/null
+ fi
+
+ DOCKER_ENV=$(cat < 1:
+ ratio_h = (in_h - 1.0) / (out_h - 1.0)
+ else:
+ ratio_h = 0.0
+ if out_w > 1:
+ ratio_w = (in_w - 1.0) / (out_w - 1.0)
+ else:
+ ratio_w = 0.0
+
+ out = np.zeros((batch_size, channel, out_h, out_w))
+ for i in range(out_h):
+ h = int(ratio_h * i)
+ hid = 1 if h < in_h - 1 else 0
+ h1lambda = ratio_h * i - h
+ h2lambda = 1.0 - h1lambda
+ for j in range(out_w):
+ w = int(ratio_w * j)
+ wid = 1 if w < in_w - 1 else 0
+ w1lambda = ratio_w * j - w
+ w2lambda = 1.0 - w1lambda
+
+ out[:, :, i, j] = h2lambda*(w2lambda*input[:, :, h, w] +
+ w1lambda*input[:, :, h, w+wid]) + \
+ h1lambda*(w2lambda*input[:, :, h+hid, w] +
+ w1lambda*input[:, :, h+hid, w+wid])
+ return out.astype("float32")
+
+
+class TestBilinearInterpOp(OpTest):
+ def setUp(self):
+ self.init_test_case()
+ self.op_type = "bilinear_interp"
+ input_np = np.random.random(self.input_shape).astype("float32")
+ output_np = bilinear_interp_np(input_np, self.out_h, self.out_w)
+
+ self.inputs = {'X': input_np}
+ self.attrs = {'out_h': self.out_h, 'out_w': self.out_w}
+ self.outputs = {'Out': output_np}
+
+ def test_check_output(self):
+ self.check_output()
+
+ def test_check_grad(self):
+ self.check_grad(['X'], 'Out', in_place=True)
+
+ def init_test_case(self):
+ self.input_shape = [2, 3, 4, 4]
+ self.out_h = 2
+ self.out_w = 2
+
+
+class TestCase1(TestBilinearInterpOp):
+ def init_test_case(self):
+ self.input_shape = [4, 1, 7, 8]
+ self.out_h = 1
+ self.out_w = 1
+
+
+class TestCase2(TestBilinearInterpOp):
+ def init_test_case(self):
+ self.input_shape = [3, 3, 9, 6]
+ self.out_h = 12
+ self.out_w = 12
+
+
+class TestCase3(TestBilinearInterpOp):
+ def init_test_case(self):
+ self.input_shape = [1, 1, 128, 64]
+ self.out_h = 64
+ self.out_w = 128
+
+
+if __name__ == "__main__":
+ unittest.main()
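
The unit test above exercises only a few fixed shapes. Adding another configuration only requires overriding `init_test_case`; for example, a hypothetical extra case (the shape and target size below are arbitrary, not from the patch) would live in the same test file and look like:

```python
class TestCaseNonUniform(TestBilinearInterpOp):
    # Hypothetical extra case: rectangular input, non-square output.
    def init_test_case(self):
        self.input_shape = [2, 5, 16, 12]
        self.out_h = 13
        self.out_w = 9
```
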
diff --git a/tools/aws_benchmarking/README.md b/tools/aws_benchmarking/README.md
index 22a468466afbcbf7cc312e714e41a3b5adf1160c..4fdd4b0de44e779378091566d9d6056a6f9ee4b6 100644
--- a/tools/aws_benchmarking/README.md
+++ b/tools/aws_benchmarking/README.md
@@ -77,10 +77,10 @@ Training nodes will run your `ENTRYPOINT` script with the following environment
Now let's start the training process:
```bash
-docker run -i -v $HOME/.aws:/root/.aws -v :/root/.pem \
+docker run -i -v $HOME/.aws:/root/.aws -v :/root/.pem \
putcn/paddle_aws_client \
--action create \
---key_name \
+--key_name \
--security_group_id \
--docker_image myreponame/paddle_benchmark \
--pserver_count 2 \
@@ -154,8 +154,31 @@ Master exposes 4 major services:
### Parameters
-TBD, please refer to client/cluster_launcher.py for now
+ - key_name: required, AWS key pair name
+ - security_group_id: required, the security group id associated with your VPC
+ - vpc_id: the VPC in which you wish to run the test; if not provided, this tool will use your default VPC.
+ - subnet_id: the subnet in which you wish to run the test; if not provided, this tool will create a new subnet for the test.
+ - pserver_instance_type: your pserver instance type, c5.2xlarge by default, which is a memory-optimized machine.
+ - trainer_instance_type: your trainer instance type, p2.8xlarge by default, which is a GPU machine with 8 cards.
+ - task_name: the name used to identify your job; if not provided, this tool will generate one for you.
+ - pserver_image_id: AMI id for the system image. Note that although the default one has nvidia-docker installed, pserver is always launched with `docker` instead of `nvidia-docker`, so please DO NOT initialize your training program with a GPU place.
+ - pserver_command: pserver start command, for example: python,vgg.py,batch_size:128,is_local:no, which is translated to `python vgg.py --batch_size 128 --is_local no` when starting training on the pserver. "--device CPU" is passed by default.
+ - trainer_image_id: AMI id for the system image; the default one has nvidia-docker ready.
+ - trainer_command: trainer start command. The format is the same as pserver's; "--device GPU" is passed by default.
+ - availability_zone: AWS zone id in which to place EC2 instances, us-east-2a by default.
+ - trainer_count: trainer count, 1 by default.
+ - pserver_count: pserver count, 1 by default.
+ - action: create|cleanup|status, "create" by default.
+ - pserver_port: the port on which pserver opens its service, 5436 by default.
+ - docker_image: the training docker image id.
+ - master_service_port: the port on which the master opens its service, 5436 by default.
+ - master_server_public_ip: the master service ip; this is required when action is not "create".
+ - master_docker_image: master's docker image id, "putcn/paddle_aws_master:latest" by default.
+ - no_clean_up: if set to "yes", instances are not terminated when training finishes or fails. This is for debugging, so that you can inspect the instances after the process has finished.
+
### Troubleshooting
-TBD
+ 1. How to check logs
+
+ Master log is served at `http://<master ip>:<master port>/status`, and you can list all the log files from `http://