diff --git a/README.md b/README.md index bbb2d498589092de78b21a662f03171a0721f840..ceeb6d9e5193763293d3fce76e464340fbce533f 100644 --- a/README.md +++ b/README.md @@ -2,8 +2,8 @@ [![Build Status](https://travis-ci.org/PaddlePaddle/Paddle.svg?branch=develop)](https://travis-ci.org/PaddlePaddle/Paddle) -[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://doc.paddlepaddle.org/develop/doc/) -[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://doc.paddlepaddle.org/develop/doc_cn/) +[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://www.paddlepaddle.org/docs/develop/documentation/en/getstarted/index_en.html) +[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://www.paddlepaddle.org/docs/develop/documentation/zh/getstarted/index_cn.html) [![Coverage Status](https://coveralls.io/repos/github/PaddlePaddle/Paddle/badge.svg?branch=develop)](https://coveralls.io/github/PaddlePaddle/Paddle?branch=develop) [![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle.svg)](https://github.com/PaddlePaddle/Paddle/releases) [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE) diff --git a/doc/design/fluid-compiler.graffle b/doc/design/fluid-compiler.graffle new file mode 100644 index 0000000000000000000000000000000000000000..c933df2cb855462c52b2d25f7f9a99b95652961d Binary files /dev/null and b/doc/design/fluid-compiler.graffle differ diff --git a/doc/design/fluid-compiler.png b/doc/design/fluid-compiler.png new file mode 100644 index 0000000000000000000000000000000000000000..1b0ffed2039c91a3a00bbb719da08c91c3acf7bb Binary files /dev/null and b/doc/design/fluid-compiler.png differ diff --git a/doc/design/fluid.md b/doc/design/fluid.md new file mode 100644 index 0000000000000000000000000000000000000000..585dc8ef39c0cfb30f470d79f7b27a59ceb5e940 --- /dev/null +++ b/doc/design/fluid.md @@ -0,0 +1,122 @@ +# Design Doc: PaddlePaddle Fluid + +## Why Fluid + +When Baidu developed PaddlePaddle in 2013, the only well-known open source deep learning system at the time was Caffe. However, when PaddlePaddle was open-sourced in 2016, many other choices were available. There was a challenge -- what is the need for open sourcing yet another deep learning framework? + +Fluid is the answer. Fluid is similar to PyTorch and TensorFlow Eager Execution, which describes the "process" of training or inference using the concept of a model. In fact in PyTorch, TensorFlow Eager Execution and Fluid, there is no concept of a model at all. The details are covered in the sections below. Fluid is currently more extreme in the above mentioned idea than PyTorch and Eager Execution, and we are trying to push Fluid towards the directions of a compiler and a new programming language for deep learning. + +## The Evolution of Deep Learning Systems + +Deep learning infrastructure is one of the fastest evolving technologies. Within four years, there have already been three generations of technologies invented. + +| Existed since | model as sequence of layers | model as graph of operators | No model | +|--|--|--|--| +| 2013 | Caffe, Theano, Torch, PaddlePaddle | | | +| 2015 | | TensorFlow, MxNet, Caffe2, ONNX, n-graph | | +| 2016 | | | PyTorch, TensorFlow Eager Execution, PaddlePaddle Fluid | + +From the above table, we see that the deep learning technology is evolving towards getting rid of the concept of a model. 
To understand the reasons behind this direction, a comparison of the *programming paradigms* or the ways to program deep learning applications using these systems, would be helpful. The following section goes over these. + +## Deep Learning Programming Paradigms + +With the systems listed as the first or second generation, e.g., Caffe or TensorFlow, an AI application training program looks like the following: + +```python +x = layer.data("image") +l = layer.data("label") +f = layer.fc(x, W) +s = layer.softmax(f) +c = layer.mse(l, s) + +for i in xrange(1000): # train for 1000 iterations + m = read_minibatch() + forward({input=x, data=m}, minimize=c) + backward(...) + +print W # print the trained model parameters. +``` + +The above program includes two parts: + +1. The first part describes the model, and +2. The second part describes the training process (or inference process) for the model. + +This paradigm has a well-known problem that limits the productivity of programmers. If the programmer made a mistake in configuring the model, the error messages wouldn't show up until the second part is executed and `forward` and `backward` propagations are performed. This makes it difficult for the programmer to debug and locate a mistake that is located blocks away from the actual error prompt. + +This problem of being hard to debug and re-iterate fast on a program is the primary reason that programmers, in general, prefer PyTorch over the older systems. Using PyTorch, we would write the above program as following: + +```python +W = tensor(...) + +for i in xrange(1000): # train for 1000 iterations + m = read_minibatch() + x = m["image"] + l = m["label"] + f = layer.fc(x, W) + s = layer.softmax(f) + c = layer.mse(l, s) + backward() + +print W # print the trained model parameters. +``` + +We can see that the main difference is the moving the model configuration part (the first step) into the training loop. This change would allow the mistakes in model configuration to be reported where they actually appear in the programming block. This change also represents the model better, or its forward pass, by keeping the configuration process in the training loop. + +## Describe Arbitrary Models for the Future + +Describing the process instead of the model also brings Fluid, the flexibility to define different non-standard models that haven't been invented yet. + +As we write out the program for the process, we can write an RNN as a loop, instead of an RNN as a layer or as an operator. A PyTorch example would look like the following: + +```python +for i in xrange(1000): + m = read_minibatch() + x = m["sentence"] + for t in xrange x.len(): + h[t] = the_step(x[t]) +``` + +With Fluid, the training loop and the RNN in the above program are not really Python loops, but just a "loop structure" provided by Fluid and implemented in C++ as the following: + +```python +train_loop = layers.While(cond) +with train_loop.block(): + m = read_minibatch() + x = m["sentence"] + rnn = layers.While(...) + with rnn.block(): + h[t] = the_step(input[t]) +``` + +An actual Fluid example is described [here](https://github.com/PaddlePaddle/Paddle/blob/a91efdde6910ce92a78e3aa7157412c4c88d9ee8/python/paddle/v2/fluid/tests/test_while_op.py#L36-L44). + +From the example, the Fluid programs look very similar to their PyTorch equivalent programs, except that Fluid's loop structure, wrapped with Python's `with` statement, could run much faster than just a Python loop. 
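+
+In the same spirit, a conditional can be written as a block structure rather than a Python `if`. The snippet below is only an illustrative sketch that mirrors the `While` example above; the names `layers.IfElse`, `true_block`, and `false_block` are placeholders for exposition, not the actual Fluid API:
+
+```python
+ie = layers.IfElse(cond)   # hypothetical block-style conditional
+with ie.true_block():
+    # runs for the part of the mini-batch where cond is true
+    h = layers.fc(x, W_true)
+with ie.false_block():
+    # runs for the remaining part of the mini-batch
+    h = layers.fc(x, W_false)
+```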
+
+We have more examples of the [`if-then-else`](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/if_else_op.md) structure of Fluid.
+
+## Turing Completeness
+
+In computability theory, a system of data-manipulation rules, such as a programming language, is said to be Turing complete if it can be used to simulate any Turing machine. For a programming language, if it provides if-then-else and loop, it is Turing complete. From the above examples, Fluid seems to be Turing complete; however, it is worth noticing that there is a slight difference between the `if-then-else` of Fluid and that of a programming language: the former runs both of its branches and splits the input mini-batch into two -- one for the True condition and another for the False condition. It has not been researched in depth whether this is equivalent to the `if-then-else` in programming languages that makes them Turing complete. Based on a conversation with [Yuang Yu](https://research.google.com/pubs/104812.html), it seems to be the case, but this needs to be looked into further.
+
+## The Execution of a Fluid Program
+
+There are two ways to execute a Fluid program. When a program is executed, it creates a protobuf message [`ProgramDesc`](https://github.com/PaddlePaddle/Paddle/blob/a91efdde6910ce92a78e3aa7157412c4c88d9ee8/paddle/framework/framework.proto#L145) that describes the process and is conceptually like an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree).
+
+There is a C++ class [`Executor`](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/executor.h), which runs a `ProgramDesc`, similar to how an interpreter runs a Python program.
+
+Fluid is moving towards the direction of a compiler, which is explained in more detail later in this article.
+
+## Backward Compatibility of Fluid
+
+Despite all the advantages of removing the concept of a *model*, hardware manufacturers might still prefer the existence of such a concept, because it makes it easier for them to support multiple frameworks at once and to run a trained model during inference. For example, Nervana, a startup company acquired by Intel, has been working on an XPU that reads models in the format known as [n-graph](https://github.com/NervanaSystems/ngraph). Similarly, [Movidius](https://www.movidius.com/) is producing a mobile deep learning chip that reads and runs graphs of operators. The well-known [ONNX](https://github.com/onnx/onnx) is also a file format for graphs of operators.
+
+For Fluid, we can write a converter that extracts the parts in the `ProgramDesc` protobuf message, converts them into a graph of operators, and exports the graph into the ONNX or n-graph format.
+
+## Towards a Deep Learning Language and the Compiler
+
+We can change the `if-then-else` and loop structures a little bit in the above Fluid example programs, to make it into a new programming language, different from Python.
+
+Even if we do not invent a new language, as long as we get the `ProgramDesc` message filled in, we can write a transpiler that translates each invocation of an operator into a C++ call to a kernel function of that operator. For example, a transpiler that weaves in the CUDA kernels outputs an NVIDIA-friendly C++ program, which can be built using `nvcc`. Another transpiler could generate MKL-friendly code that should be built using `icc` from Intel.
More interestingly, we can translate a Fluid program into its distributed version of two `ProgramDesc` messages, one for running on the trainer process, and the other one for the parameter server. For more details of the last example, the [concurrent programming design](concurrent_programming.md) document would be a good pointer. The following figure explains the proposed two-stage process: + +![](fluid-compiler.png) diff --git a/doc/design/images/multigpu_allreduce.graffle b/doc/design/images/multigpu_allreduce.graffle new file mode 100644 index 0000000000000000000000000000000000000000..cb5bc420ceafe8ba4c87694d44ee4e5e4ad06779 Binary files /dev/null and b/doc/design/images/multigpu_allreduce.graffle differ diff --git a/doc/design/images/multigpu_allreduce.png b/doc/design/images/multigpu_allreduce.png new file mode 100644 index 0000000000000000000000000000000000000000..87a1b3e8f6dd4a713ec9df9f0037d1da04e9178a Binary files /dev/null and b/doc/design/images/multigpu_allreduce.png differ diff --git a/doc/design/images/multigpu_before_convert.graffle b/doc/design/images/multigpu_before_convert.graffle new file mode 100644 index 0000000000000000000000000000000000000000..6c35ab1b21fb76ceae82d3693ed0d085b5bc0855 Binary files /dev/null and b/doc/design/images/multigpu_before_convert.graffle differ diff --git a/doc/design/images/multigpu_before_convert.png b/doc/design/images/multigpu_before_convert.png new file mode 100644 index 0000000000000000000000000000000000000000..9c8f7711165d80a2fa3911280fdee91855a401b1 Binary files /dev/null and b/doc/design/images/multigpu_before_convert.png differ diff --git a/doc/design/mkldnn/image/engine.png b/doc/design/mkl/image/engine.png similarity index 100% rename from doc/design/mkldnn/image/engine.png rename to doc/design/mkl/image/engine.png diff --git a/doc/design/mkldnn/image/gradients.png b/doc/design/mkl/image/gradients.png similarity index 100% rename from doc/design/mkldnn/image/gradients.png rename to doc/design/mkl/image/gradients.png diff --git a/doc/design/mkldnn/image/layers.png b/doc/design/mkl/image/layers.png similarity index 100% rename from doc/design/mkldnn/image/layers.png rename to doc/design/mkl/image/layers.png diff --git a/doc/design/mkldnn/image/matrix.png b/doc/design/mkl/image/matrix.png similarity index 100% rename from doc/design/mkldnn/image/matrix.png rename to doc/design/mkl/image/matrix.png diff --git a/doc/design/mkldnn/image/overview.png b/doc/design/mkl/image/overview.png similarity index 100% rename from doc/design/mkldnn/image/overview.png rename to doc/design/mkl/image/overview.png diff --git a/doc/design/mkl/mkl_packed.md b/doc/design/mkl/mkl_packed.md new file mode 100644 index 0000000000000000000000000000000000000000..c07f7d0cbe9942e626bddbc37477e84e135f8e49 --- /dev/null +++ b/doc/design/mkl/mkl_packed.md @@ -0,0 +1,95 @@ +# Intel® MKL Packed on PaddlePaddle: Design Doc + + +## Contents + +- [Overview](#overview) +- [Key Points](#key-points) + - [Background](#background) + - [Solution](#solution) +- [Actions](#actions) + - [CMake](#cmake) + - [Layers](#layers) + - [Unit Tests](#unit-tests) + - [Python API](#python-api) + - [Benchmarking](#benchmarking) + + +## Overview +我们计划将 Intel® MKL 中引入的 GEMM Packed APIs\[[1](#references)\] 集成到 PaddlePaddle 中,充分发挥英特尔平台的优势,有效提升PaddlePaddle在英特尔架构上的性能。 +现阶段的优化主要针对 Recurrent Neural Network(以下简称RNN)相关层(包括`RecurrentLayer`, `GatedRecurrentLayer`和`LstmLayer`), 以及 PaddlePaddle V1 API。 + +## Key Points + +### Background +目前PaddlePaddle采用了 Intel® 
MKL库的[cblas_?gemm](https://software.intel.com/en-us/mkl-developer-reference-c-cblas-gemm)函数,这个函数本身会在计算前将原数据转换为更适合英特尔平台的内部格式。 + +1. 转换耗时 \ +这一数据格式的转换操作(Packing),在问题本身的计算量比较小的时候,显得相对来说较为耗时。例如在DeepSpeech2 \[[2](#references)\] 的Vanilla RNN部分中,矩阵大小是`batch_size * 2048`。 +2. 转换冗余 \ +由于在现有的某些情况下(例如RNN),多次调用 cblas_?gemm 会使用相同的原数据,因此,每次调用时对原数据的重复Packing便成为了冗余。 + +为了最大程度减少多次调用 cblas_?gemm 在Packing上的耗时,Intel® MKL 引入了以下四个API: + * cblas_?gemm_alloc + * cblas_?gemm_pack + * cblas_?gemm_compute + * cblas_?gemm_free + +通过使用这些API,我们可以先完成对原数据的Packing操作,再把已转换为Packed格式的数据传递给那些复用同一数据的gemm_compute函数,从而避免了Packing冗余。 + +### Solution +在RNN的情况下,同一次前向、后向(forward/backward)过程中所有时间步(time step)共享同一个权重(weight)。当只做推断(inference)时,各次前向之间也都使用了相同的权重,没有必要在每次前向中每个时间步的计算时对权重进行重复的Packing操作。 + +我们通过使用新引入的GEMM Packed APIs,在层初始化的时候,先完成对权重的Packing操作,然后在前向,后向时复用已经转换过的权重,并在每次权重更新后,对新的权重进行转换用于下次迭代。 + +* 优化前,对于序列长度(sequence length)为`T`的网络模型(model), `N`次迭代执行的转换次数为: + - `inference`: `N * T` + - `training`: `2 * N * T` +* 优化后,对于同样设置的网络模型,其转换次数减少至: + - `inference`: `1` + - `training`: `2 * N` + +## Actions + +添加的相关文件和目录结构如下: + +```txt +PaddlePaddle/Paddle +├── ... +└── paddle/ + ├── ... + └── gserver/ + ├── ... + ├── layers/ + │ ├── ... + │ ├── MKLPackedRecurrentLayer.* + | ├── MKLPackedGatedRecurrentLayer.* + | ├── MKLPackedLstmLayer.* + | └── MKLPackedGemm.h + └── tests/ + ├── ... + └── test_MKLPacked.cpp +``` + +### CMake +在对应的`CMakeLists.txt`中根据`WITH_MKL`是否打开,来决定是否开启MKL Packed相关功能。 + +### Layers +所有的`MKLPacked*Layer`都继承于PaddlePaddle的基类`Layer`, 并添加头文件 `MKLPackedGemm.h`,该文件对相关GEMM Packed APIs做了封装。 + +### Unit Tests +我们会添加`test_MKLPacked.cpp`用于MKL Packed优化后layer的测试。 +对于每一个新加的RNN layer,我们会对比如下2个方面: +1. 对比优化后layer自身,sequence mode(`rnn_use_batch=false`)与batch mode(`rnn_use_batch=true`)的结果。 +2. 对比优化后layer与相对应的PaddlePaddle原有layer, 在batch mode下的结果。 + +### Python API +TBD + +### Benchmarking +会添加相应的脚本用于测试和对比在使用MKL Packed recurrent layers 前后的网络性能。 + +## References +1. [Introducing the new Packed APIs for GEMM](https://software.intel.com/en-us/articles/introducing-the-new-packed-apis-for-gemm) +2. [DeepSpeech2 on PaddlePaddle](https://github.com/PaddlePaddle/DeepSpeech#deepspeech2-on-paddlepaddle) + diff --git a/doc/design/mkldnn/README.MD b/doc/design/mkl/mkldnn.md similarity index 99% rename from doc/design/mkldnn/README.MD rename to doc/design/mkl/mkldnn.md index 61d453de243c25defc56161641bc4a888a88a3b7..e2fe1e6b26ffa73fda81863abfadf697c0acbfcf 100644 --- a/doc/design/mkldnn/README.MD +++ b/doc/design/mkl/mkldnn.md @@ -208,4 +208,3 @@ if use_mkldnn 但是在PaddlePaddle中,无论是重构前的layer还是重构后的op,都不会想要知道next layer/op的信息。 4. MKL-DNN的高性能格式与PaddlePaddle原有的`NCHW`不同(PaddlePaddle中的cuDNN部分使用的也是`NCHW`,所以不存在这个问题)。 所以需要引入一个转换方法,并且只需要在必要的时候转换这种格式,才能更好的发挥MKL-DNN的性能。 - diff --git a/doc/design/paddle_nccl.md b/doc/design/paddle_nccl.md new file mode 100644 index 0000000000000000000000000000000000000000..c7dac70998a6cfec3a6d2fc72b698ff9722e6805 --- /dev/null +++ b/doc/design/paddle_nccl.md @@ -0,0 +1,65 @@ +# Design Doc: NCCL support in Paddle Fluid + +## Abstract + +This Design Doc refers to the NCCL feature in paddle. We propose an approach to support NCCL library both on a single machine and multiple machines. We wrapper the NCCL primitives `Broadcast`, `Allreduce`, `Reduce` as operators to utilize Multi-GPU powers in one script. 
+
+
+## Motivation
+
+[NCCL](https://developer.nvidia.com/nccl) is an NVIDIA library for multi-GPU communication, optimized for NVIDIA GPUs. It provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter that achieve high bandwidth over PCIe and the NVLink high-speed interconnect. With the NCCL library, we can easily accelerate training in parallel.
+
+- Pros
+1. Easy to plug in with the [NCCL2](https://developer.nvidia.com/nccl) library.
+1. High performance on NVIDIA GPUs.
+1. MPI-like primitives, which have a low learning cost for users.
+
+- Cons
+1. Designed only for NVIDIA GPUs, so it is not a general multi-device solution.
+1. Although NCCL1 is open-sourced under the BSD license, NCCL2 is no longer open source.
+
+At the beginning of training, the framework needs to distribute the same parameters to every GPU, and to merge the gradients whenever the user requires.
+
+As a result, during training we need peer-to-peer copies between different GPUs, aggregation of gradients/parameters from GPUs, and broadcasting of parameters to GPUs. Every GPU only needs to run the operator with the correct place information.
+
+Besides, we need interfaces to synchronize model updates across the different GPU cards.
+
+## Implementation
+
+As mentioned above, we wrap the NCCL routines as several kinds of operators. Note that NCCL needs to create a communicator among the GPUs at the beginning, so an NCCLInit operator is created.
+
+### Transpiler
+
+To be compatible with the [parameter server design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md), the transpiler compiles the user-defined operation graph into sub-graphs to be executed on different devices.
+
+1. The user-defined model will be a single-device program.
+
+2. Broadcast/Reduce operators between GPUs will be inserted into the program; in the multi-node case, `Send` and `Recv` operators may also be inserted.
+
+   *Broadcast and AllReduce within a single machine; Broadcast, AllReduce, and [Send, Recv](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/ops/dist_train.md#graph-converter) across multiple machines.*
+
+![](images/multigpu_before_convert.png)
+
+After compiling, the graph is as shown below:
+
+![](images/multigpu_allreduce.png)
+
+Operators are added to the sub-graphs. Every GPU is assigned a role such as `rank0`, `rank1`, etc.
+
+- **Broadcast**. The Broadcast operator distributes an initialized parameter to all the GPUs from the GPU that owns it, e.g. from the `rank0` GPU.
+- **AllReduce**. The AllReduce operator synchronizes parameters/gradients between GPUs. AllReduce is implemented with the ring-based communication method, which avoids the bottleneck of a single GPU.
+
+Note that the AllReduce operator forces the GPUs to synchronize at that point. Whether the whole training process runs in asynchronous or synchronous mode depends on where the AllReduce points are placed in the graph.
+
+As shown in the picture, each GPU computes the gradient of `W`, and the following `AllReduce` operator accumulates `dW` over the full batch of data; each GPU then runs the optimization process individually and applies the gradient to its own `W`.
+
+- **AllReduce**
+  Note that our AllReduce operator is a ring-based AllReduce implementation. If we used the NCCL2 AllReduce primitive instead, every GPU would optimize over the full batch of data, wasting (n-1) GPUs' worth of compute. In addition, NCCL2's built-in AllReduce only uses the communication resources during synchronization, and updating the gradients becomes a subsequent phase. Instead, we can amortize the gradient-update cost into the communication phase. The process is:
+1. Every parameter has its root card.
That card will responsible for aggregating the gradients from GPUs. +2. The whole model's parameter will be hashed to different root card, ensure the load balance between GPUs. +3. Logically neighberhood card will start send parameter to the next one. After one round, the parameter main card will aggregate the full gradients. +4. Then the root card will optimize the parameter. +5. This parameter card will send its optimized result to its neighberhood, then the neighberhood will send parameter to its next one. +6. Finish the sychronization round. + +The total time cost will be 2 * (n-1) * per-parameter-send-time, we reach the goal of amortize the upgrade time into communicating phase. diff --git a/doc/design/support_new_device.md b/doc/design/support_new_device.md index 92443e43927ddb184ed51199a3a8c548ae607b3f..fd23dc211a35fdc9d87bc9233fcf4e90254da748 100644 --- a/doc/design/support_new_device.md +++ b/doc/design/support_new_device.md @@ -1,33 +1,33 @@ -# Design Doc: Support new Device/Library +# Design Doc: Supporting new Device/Library ## Background -Deep learning has a high demand for computing resources. New high-performance device and computing library are coming constantly. The deep learning framework has to integrate these high-performance device and computing library flexibly. +Deep learning has a high demand for computing resources. New high-performance devices and computing libraries are appearing very frequently. Deep learning frameworks have to integrate these high-performance devices and computing libraries flexibly and efficiently. -On the one hand, hardware and computing library are not usually one-to-one coresponding relations. For example, in Intel CPU, there are Eigen and MKL computing library. And in Nvidia GPU, there are Eigen and cuDNN computing library. We have to implement specific kernels for an operator for each computing library. +On one hand, hardware and computing libraries usually do not have a one-to-one correspondence. For example,Intel CPUs support Eigen and MKL computing libraries while Nvidia GPUs support Eigen and cuDNN computing libraries. We have to implement operator specific kernels for each computing library. -On the other hand, users usually do not want to care about the low-level hardware and computing library when writing a neural network configuration. In Fluid, `Layer` is exposed in `Python`, and `Operator` is exposed in `C++`. Both `Layer` and `Operator` are independent on hardwares. +On the other hand, users usually do not want to care about the low-level hardware and computing libraries when writing a neural network configuration. In Fluid, `Layer` is exposed in `Python`, and `Operator` is exposed in `C++`. Both `Layer` and `Operator` are hardware independent. So, how to support a new Device/Library in Fluid becomes a challenge. ## Basic: Integrate A New Device/Library -For a general overview of fluid, please refer to [overview doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md). +For a general overview of fluid, please refer to the [overview doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/read_source.md). 
-There are mainly there parts we have to consider in integrating a new device/library: +There are mainly three parts that we have to consider while integrating a new device/library: - Place and DeviceContext: indicates the device id and manages hardware resources - Memory and Tensor: malloc/free data on certain device -- Math Functor and OpKernel: implement computing unit on certain device/library +- Math Functor and OpKernel: implement computing unit on certain devices/libraries ### Place and DeviceContext #### Place -Fluid use class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55) to represent specific device and computing library. There are inheritance relationships between different kinds of `Place`. +Fluid uses class [Place](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/place.h#L55) to represent different devices and computing libraries. There are inheritance relationships between different kinds of `Place`. ``` | CPUPlace --> MKLDNNPlace @@ -43,7 +43,7 @@ typedef boost::variant Place; #### DeviceContext -Fluid use class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30) to manage the resources in certain hardware, such as CUDA stream in `CDUADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`. +Fluid uses class [DeviceContext](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/platform/device_context.h#L30) to manage the resources in different hardwares, such as CUDA stream in `CDUADeviceContext`. There are also inheritance relationships between different kinds of `DeviceContext`. ``` @@ -52,7 +52,7 @@ DeviceContext ----> CUDADeviceContext --> CUDNNDeviceContext \-> FPGADeviceContext ``` -A example of Nvidia GPU is as follows: +An example of Nvidia GPU is as follows: - DeviceContext @@ -93,7 +93,7 @@ class CUDNNDeviceContext : public CUDADeviceContext { #### memory module -Fluid provide following [memory interfaces](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36): +Fluid provides the following [memory interfaces](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/memory/memory.h#L36): ``` template @@ -106,12 +106,12 @@ template size_t Used(Place place); ``` -To implementing these interfaces, we have to implement MemoryAllocator for specific Device +To implementing these interfaces, we have to implement MemoryAllocator for different Devices #### Tensor -[Tensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36) holds data with some shape in certain Place. +[Tensor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.h#L36) holds data with some shape in a specific Place. ```cpp class Tensor { @@ -168,7 +168,7 @@ t.mutable_data(place); ### Math Functor and OpKernel -Fluid implements computing unit based on different DeviceContext. Some computing unit is shared between operators. These common part will be put in operators/math directory as basic Functors. +Fluid implements computing units based on different DeviceContexts. Some computing units are shared between operators. This common part will be put in operators/math directory as basic Functors. 
Let's take [MaxOutFunctor](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/math/maxouting.h#L27) as an example: @@ -183,7 +183,7 @@ class MaxOutFunctor { }; ``` -CPU implement in .cc file +CPU implemention is in .cc file ``` template @@ -197,7 +197,7 @@ class MaxOutFunctor { }; ``` -CUDA implement in .cu file +CUDA implemention is in .cu file ``` template @@ -212,11 +212,11 @@ class MaxOutFunctor { ``` -We get computing handle from concrete DeviceContext, and make compution on tensors. +We get computing handle from a concrete DeviceContext, and make compution on tensors. -The implement of `OpKernel` is similar to math functors, the extra thing we need to do is registering the OpKernel to global map. +The implemention of `OpKernel` is similar to math functors, the extra thing we need to do is to register the OpKernel in a global map. -Fluid provides different register interface in op_registry.h +Fluid provides different register interfaces in op_registry.h Let's take [Crop](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/crop_op.cc#L134) operator as an example: @@ -240,7 +240,7 @@ REGISTER_OP_CUDA_KERNEL( ## Advanced topics: How to switch between different Device/Library -Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale in a specific Device. For example, crf operator can be only run at CPU, whereas most other operators can be run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library. +Generally, we will impelement OpKernel for all Device/Library of an Operator. We can easily train a Convolutional Neural Network in GPU. However, some OpKernel is not sutibale on a specific Device. For example, crf operator can only run on CPU, whereas most other operators can run at GPU. To achieve high performance in such circumstance, we have to switch between different Device/Library. We will discuss how to implement an efficient OpKernel switch policy. diff --git a/doc/faq/build_and_install/index_cn.rst b/doc/faq/build_and_install/index_cn.rst index f1677e216f31d79b53ac29a0afbf6fbb886a0dcd..a2bdeead7841393fdfe90c78e5b91d9e61678a24 100644 --- a/doc/faq/build_and_install/index_cn.rst +++ b/doc/faq/build_and_install/index_cn.rst @@ -14,7 +14,7 @@ $ export CUDA_SO="$(\ls usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') - $ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddlepaddle:latest-gpu + $ docker run ${CUDA_SO} ${DEVICES} -it paddlepaddle/paddle:latest-gpu 更多关于Docker的安装与使用, 请参考 `PaddlePaddle Docker 文档 `_ 。 diff --git a/doc/getstarted/build_and_install/docker_install_cn.rst b/doc/getstarted/build_and_install/docker_install_cn.rst index f78b1fb0e11aa028a4b7abb5270740b97f8039e9..1eb06e4182d40c3be20d71e37b34009905eaf9d6 100644 --- a/doc/getstarted/build_and_install/docker_install_cn.rst +++ b/doc/getstarted/build_and_install/docker_install_cn.rst @@ -114,7 +114,7 @@ PaddlePaddle Book是为用户和开发者制作的一个交互式的Jupyter Note .. 
code-block:: bash - nvidia-docker run -it -v $PWD:/work paddledev/paddle:latest-gpu /bin/bash + nvidia-docker run -it -v $PWD:/work paddlepaddle/paddle:latest-gpu /bin/bash **注: 如果没有安装nvidia-docker,可以尝试以下的方法,将CUDA库和Linux设备挂载到Docker容器内:** @@ -122,7 +122,7 @@ PaddlePaddle Book是为用户和开发者制作的一个交互式的Jupyter Note export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') - docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:latest-gpu + docker run ${CUDA_SO} ${DEVICES} -it paddlepaddle/paddle:latest-gpu **关于AVX:** diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index d7acc7aeb744b19d83acb520d07c8551168dd096..5a46c598f2248c7912169a9e77b16851230c1d2e 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -122,7 +122,7 @@ GPU driver installed before move on. .. code-block:: bash - nvidia-docker run -it -v $PWD:/work paddledev/paddle:latest-gpu /bin/bash + nvidia-docker run -it -v $PWD:/work paddlepaddle/paddle:latest-gpu /bin/bash **NOTE: If you don't have nvidia-docker installed, try the following method to mount CUDA libs and devices into the container.** @@ -130,7 +130,7 @@ GPU driver installed before move on. export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') - docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:latest-gpu + docker run ${CUDA_SO} ${DEVICES} -it paddlepaddle/paddle:latest-gpu **About AVX:** diff --git a/doc/getstarted/concepts/src/infer.py b/doc/getstarted/concepts/src/infer.py new file mode 100644 index 0000000000000000000000000000000000000000..4cc58dfee0bd6dade0340b4fd0ee1adb49ffebf6 --- /dev/null +++ b/doc/getstarted/concepts/src/infer.py @@ -0,0 +1,18 @@ +import paddle.v2 as paddle +import numpy as np + +paddle.init(use_gpu=False) +x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(2)) +y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear()) + +# loading the model which generated by training +with open('params_pass_90.tar', 'r') as f: + parameters = paddle.parameters.Parameters.from_tar(f) + +# Input multiple sets of data,Output the infer result in a array. 
+i = [[[1, 2]], [[3, 4]], [[5, 6]]] +print paddle.infer(output_layer=y_predict, parameters=parameters, input=i) +# Will print: +# [[ -3.24491572] +# [ -6.94668722] +# [-10.64845848]] diff --git a/doc/getstarted/concepts/src/train.py b/doc/getstarted/concepts/src/train.py index 8aceb23406a476f08639cc6223cdf730b728a705..4bccbfca3c70c12aec564e2cae3b8ca174b68777 100644 --- a/doc/getstarted/concepts/src/train.py +++ b/doc/getstarted/concepts/src/train.py @@ -26,6 +26,11 @@ def event_handler(event): if event.batch_id % 1 == 0: print "Pass %d, Batch %d, Cost %f" % (event.pass_id, event.batch_id, event.cost) + # product model every 10 pass + if isinstance(event, paddle.event.EndPass): + if event.pass_id % 10 == 0: + with open('params_pass_%d.tar' % event.pass_id, 'w') as f: + trainer.save_parameter_to_tar(f) # define training dataset reader diff --git a/doc/getstarted/concepts/use_concepts_cn.rst b/doc/getstarted/concepts/use_concepts_cn.rst index c243083794bb3c4659242de99b3b2715af9d7c24..e695ff283e2e806377a51c559b37e8068360a4ff 100644 --- a/doc/getstarted/concepts/use_concepts_cn.rst +++ b/doc/getstarted/concepts/use_concepts_cn.rst @@ -147,4 +147,9 @@ PaddlePaddle支持不同类型的输入数据,主要包括四种类型,和 .. literalinclude:: src/train.py :linenos: +使用以上训练好的模型进行预测,取其中一个模型params_pass_90.tar,输入需要预测的向量组,然后打印输出: + +.. literalinclude:: src/infer.py + :linenos: + 有关线性回归的实际应用,可以参考PaddlePaddle book的 `第一章节 `_。 diff --git a/doc/mobile/cross_compiling_for_ios_cn.md b/doc/mobile/cross_compiling_for_ios_cn.md index 9da48e7f2119ce901fbb3abab73400df27be16d2..d5196d9a4c93c7692d2a624ec7d0650e32806338 100644 --- a/doc/mobile/cross_compiling_for_ios_cn.md +++ b/doc/mobile/cross_compiling_for_ios_cn.md @@ -18,11 +18,11 @@ PaddlePaddle为交叉编译提供了工具链配置文档[cmake/cross_compiling/ - `CMAKE_SYSTEM_NAME`,CMake编译的目标平台,必须设置为`iOS`。在设置`CMAKE_SYSTEM_NAME=iOS`后,PaddlePaddle的CMake系统会自动编译所有的第三方依赖库,并且强制设置一些PaddlePaddle参数的值(`WITH_C_API=ON`、`WITH_GPU=OFF`、`WITH_AVX=OFF`、`WITH_PYTHON=OFF`、`WITH_RDMA=OFF`)。 - `WITH_C_API`,是否编译C-API预测库,必须设置为ON。在iOS平台上只支持使用C-API来预测。 -- `WITH_SWIG_PY`,必须设置为ON。在iOS平台上不支持通过swig调用来训练或者预测。 +- `WITH_SWIG_PY`,必须设置为`OFF`。在iOS平台上不支持通过swig调用来训练或者预测。 iOS平台可选配置参数: -- `IOS_PLATFORM`,可设置为`OS/SIMULATOR`,默认值为`OS`。 +- `IOS_PLATFORM`,可设置为`OS`(默认值)或`SIMULATOR`。 - `OS`,构建目标为`arm`架构的iPhone或者iPad等物理设备。 - `SIMULATOR`,构建目标为`x86`架构的模拟器平台。 - `IOS_ARCH`,目标架构。针对不同的`IOS_PLATFORM`,可设置的目标架构如下表所示,默认编译所有架构: diff --git a/doc/mobile/cross_compiling_for_ios_en.md b/doc/mobile/cross_compiling_for_ios_en.md new file mode 100644 index 0000000000000000000000000000000000000000..aa390cd61f3fbd75e5a3b342f3559e76da35a918 --- /dev/null +++ b/doc/mobile/cross_compiling_for_ios_en.md @@ -0,0 +1,120 @@ +# PaddlePaddle Compiling Guide for iOS + +This tutorial will walk you through cross compiling the PaddlePaddle library for iOS from the source in MacOS. + +## Preparation + +Apple provides Xcode for cross-compiling and IDE for iOS development. Download from App store or [here](https://developer.apple.com/cn/xcode/). To verify your installation, run command as follows + +```bash +$ xcodebuild -version +Xcode 9.0 +Build version 9A235 +``` + +## Cross-compiling configurations + +PaddlePaddle provides cross-compiling toolchain configuration documentation [cmake/cross_compiling/ios.cmake](https://github.com/PaddlePaddle/Paddle/blob/develop/cmake/cross_compiling/ios.cmake), which has some default settings for frequently used compilers. 
+ +There are some mandatory environment variables need to be set before cross compiling PaddlePaddle for iOS: + +- `CMAKE_SYSTEM_NAME`, CMake compiling target platform name, has to be `iOS`. PaddlePaddle CMake will compile all the third party dependencies and enforce some parameters (`WITH_C_API=ON`, `WITH_GPU=OFF`, `WITH_AVX=OFF`, `WITH_PYTHON=OFF`,`WITH_RDMA=OFF`) when this variable is set with value `iOS`. + +- `WITH_C_API`, Whether to compile inference C-API library, has to be `ON`, since C-API is the only supported interface for inferencing in iOS. +- `WITH_SWIG_PY`, has to be `OFF`. It's not supported to inference or train via swig in iOS. + +Optional environment variables for iOS are: + +- `IOS_PLATFORM`, either `OS` (default) or `SIMULATOR`. + - `OS`, build targets ARM-based physical devices like iPhone or iPad. + - `SIMULATOR`, build targets x86 architecture simulators. +- `IOS_ARCH`, target architecture. By default, all architecture types will be compiled. If you need to specify the architecture to compile for, please find valid values for different `IOS_PLATFORM` settings from the table below: + + + + + + + + + + + + + + + + + + + + + + +
+| IOS_PLATFORM | IOS_ARCH             |
+|--------------|----------------------|
+| OS           | armv7, armv7s, arm64 |
+| SIMULATOR    | i386, x86_64         |
+ +- `IOS_DEPLOYMENT_TARGET`, minimum iOS version to deployment, `7.0` by default. +- `IOS_ENABLE_BITCODE`, whether to enable [Bitcode](https://developer.apple.com/library/content/documentation/IDEs/Conceptual/AppDistributionGuide/AppThinning/AppThinning.html#//apple_ref/doc/uid/TP40012582-CH35-SW3), values can be `ON/OFF`, `ON` by default. +- `IOS_USE_VECLIB_FOR_BLAS`, whether to use [vecLib](https://developer.apple.com/documentation/accelerate/veclib) framework for BLAS computing. values can be `ON/OFF`, `OFF` by default. +- `IOS_DEVELOPMENT_ROOT`, the path to `Developer` directory, can be explicitly set with your `/path/to/platform/Developer`. If left blank, PaddlePaddle will automatically pick the Xcode corresponding `platform`'s `Developer` directory based on your `IOS_PLATFORM` value. +- `IOS_SDK_ROOT`, the path to `SDK` root, can be explicitly set with your `/path/to/platform/Developer/SDKs/SDK`. if left black, PaddlePaddle will pick the latest SDK in the directory of `IOS_DEVELOPMENT_ROOT`. + +other settings: + +- `USE_EIGEN_FOR_BLAS`, whether to use Eigen for matrix computing. effective when `IOS_USE_VECLIB_FOR_BLAS=OFF`. Values can be `ON/OFF`, `OFF` by default. +- `HOST_C/CXX_COMPILER`, host C/C++ compiler. Uses value from environment variable `CC/CXX` by default or `cc/c++` if `CC/CXX` doesn't exist. + +some typical cmake configurations: + +```bash +cmake -DCMAKE_SYSTEM_NAME=iOS \ + -DIOS_PLATFORM=OS \ + -DIOS_ARCH="armv7;arm64" \ + -DIOS_ENABLE_BITCODE=ON \ + -DIOS_USE_VECLIB_FOR_BLAS=ON \ + -DCMAKE_INSTALL_PREFIX=your/path/to/install \ + -DWITH_C_API=ON \ + -DWITH_TESTING=OFF \ + -DWITH_SWIG_PY=OFF \ + .. +``` + +```bash +cmake -DCMAKE_SYSTEM_NAME=iOS \ + -DIOS_PLATFORM=SIMULATOR \ + -DIOS_ARCH="x86_64" \ + -DIOS_USE_VECLIB_FOR_BLAS=ON \ + -DCMAKE_INSTALL_PREFIX=your/path/to/install \ + -DWITH_C_API=ON \ + -DWITH_TESTING=OFF \ + -DWITH_SWIG_PY=OFF \ + .. +``` + +You can set other compiling parameters for your own need. I.E. if you are trying to minimize the library size, set `CMAKE_BUILD_TYPE` with `MinSizeRel`; or if the performance is your concern, set `CMAKE_BUILD_TYPE` with `Release`. You can even manipulate the PaddlePaddle compiling procedure by manually set `CMAKE_C/CXX_FLAGS` values. + +**TIPS for a better performance**: + +- set `CMAKE_BUILD_TYPE` with `Release` +- set `IOS_USE_VECLIB_FOR_BLAS` with `ON` + +## Compile and install + +After CMake, run following commands, PaddlePaddle will download the compile 3rd party dependencies, compile and install PaddlePaddle inference library. + +``` +$ make +$ make install +``` + +Please Note: if you compiled PaddlePaddle in the source directory for other platforms, do remove `third_party` and `build` directory within the source with `rm -rf` to ensure that all the 3rd party libraries dependencies and PaddlePaddle is newly compiled with current CMake configuration. + +`your/path/to/install` directory will have following directories after `compile` and `install`: + +- `include`, contains all the C-API header files. +- `lib`, contains PaddlePaddle C-API static library. +- `third_party` contains all the 3rd party libraries. + +Please note: if PaddlePaddle library need to support both physical devices and simulators, you will need to compile correspondingly, then merge fat library with `lipo`. + +Now you will have PaddlePaddle library compiled and installed, the fat library can be used in deep learning related iOS APPs. Please refer to C-API documentation for usage guides. 
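+
+As a rough illustration of the `lipo` merge mentioned above, assuming the device and simulator builds were installed into two separate prefixes and both produced a static library named `libpaddle_capi_whole.a` (the prefixes and the library name below are placeholders; use whatever actually appears under your own `CMAKE_INSTALL_PREFIX`), the merge could look like:
+
+```bash
+# Merge the arm (device) and x86_64 (simulator) builds into one fat library.
+lipo -create \
+    ios_os_install/lib/libpaddle_capi_whole.a \
+    ios_simulator_install/lib/libpaddle_capi_whole.a \
+    -output libpaddle_capi_whole_universal.a
+```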
diff --git a/doc/mobile/index_en.rst b/doc/mobile/index_en.rst index 3c08d736717cfe8d5fdf449dc58015086befbe60..ef421dacad458828cadf8cf505375d6c4bfd9dde 100644 --- a/doc/mobile/index_en.rst +++ b/doc/mobile/index_en.rst @@ -5,4 +5,5 @@ MOBILE :maxdepth: 1 cross_compiling_for_android_en.md + cross_compiling_for_ios_en.md cross_compiling_for_raspberry_en.md diff --git a/paddle/capi/error.cpp b/paddle/capi/error.cpp index 169b65f92104336d9ec12e2a5a6778db25080270..96ce31b45fc3f83237146443cbe4289af7bfa239 100644 --- a/paddle/capi/error.cpp +++ b/paddle/capi/error.cpp @@ -14,7 +14,7 @@ limitations under the License. */ #include "error.h" -const char* paddle_error_string(paddle_error err) { +extern "C" const char* paddle_error_string(paddle_error err) { switch (err) { case kPD_NULLPTR: return "nullptr error"; diff --git a/paddle/capi/error.h b/paddle/capi/error.h index 9d9d0ed63a5276c6b9a8747e1ee1fce6872bdc9e..2da9e0a3ef604fbcd53bac271c72ef33b3105152 100644 --- a/paddle/capi/error.h +++ b/paddle/capi/error.h @@ -29,9 +29,17 @@ typedef enum { kPD_UNDEFINED_ERROR = -1, } paddle_error; +#ifdef __cplusplus +extern "C" { +#endif + /** * Error string for Paddle API. */ PD_API const char* paddle_error_string(paddle_error err); +#ifdef __cplusplus +} +#endif + #endif diff --git a/paddle/framework/backward.cc b/paddle/framework/backward.cc index a17036c6527da3a4a32f021a57542b6b6d68a395..faf6e60cbd1bcda9864c12696b336998ea7606b7 100644 --- a/paddle/framework/backward.cc +++ b/paddle/framework/backward.cc @@ -430,14 +430,14 @@ std::vector> MakeBlockBackward( std::vector> op_grads; if ((*it)->Type() == "recurrent" || (*it)->Type() == "while") { - int step_block_idx = (*it)->GetBlockAttr("step_block"); + int step_block_idx = (*it)->GetBlockAttr("sub_block"); BlockDescBind* backward_block = CreateStepBlock( program_desc, no_grad_vars, grad_to_var, step_block_idx); op_grads = MakeOpGrad(*it, no_grad_vars, grad_to_var, {backward_block}); } else if ((*it)->Type() == "conditional_block") { BlockDescBind* backward_block = CreateStepBlock(program_desc, no_grad_vars, grad_to_var, - (*it)->GetBlockAttr("block")); + (*it)->GetBlockAttr("sub_block")); op_grads = MakeOpGrad(*it, no_grad_vars, grad_to_var, {backward_block}); } else { op_grads = MakeOpGrad(*it, no_grad_vars, grad_to_var); diff --git a/paddle/function/GemmConvOp.cpp b/paddle/function/GemmConvOp.cpp index 8d34eee886a6202691e5dec2ab62e7c5b0ac7fb1..3387cae3962baa1b10885be753d9ec92284b31f8 100644 --- a/paddle/function/GemmConvOp.cpp +++ b/paddle/function/GemmConvOp.cpp @@ -126,6 +126,11 @@ public: inputData += inputChannels * inputHeight * inputWidth; outputData += outputChannels * outputHeight * outputWidth; } +#ifdef PADDLE_MOBILE_INFERENCE + if (Device == DEVICE_TYPE_CPU) { + delete memory_; + } +#endif } }; diff --git a/paddle/gserver/layers/ROIPoolLayer.cpp b/paddle/gserver/layers/ROIPoolLayer.cpp index 2c8256b91c97b513ce7237b8174c522430094926..7d7c30b4d89e2dd137e7fc7de3159c07bbab9fb4 100644 --- a/paddle/gserver/layers/ROIPoolLayer.cpp +++ b/paddle/gserver/layers/ROIPoolLayer.cpp @@ -84,12 +84,15 @@ void ROIPoolLayer::forward(PassType passType) { size_t poolChannelOffset = pooledHeight_ * pooledWidth_; real* outputData = outputValue->getData(); - Matrix::resizeOrCreate(maxIdxs_, - numROIs, - channels_ * pooledHeight_ * pooledWidth_, - false, - false); - real* argmaxData = maxIdxs_->getData(); + real* argmaxData = nullptr; + if (passType != PASS_TEST) { + Matrix::resizeOrCreate(maxIdxs_, + numROIs, + channels_ * pooledHeight_ * pooledWidth_, + false, + 
false); + argmaxData = maxIdxs_->getData(); + } for (size_t n = 0; n < numROIs; ++n) { // the first five elememts of each RoI should be: @@ -128,14 +131,18 @@ void ROIPoolLayer::forward(PassType passType) { bool isEmpty = (hend <= hstart) || (wend <= wstart); size_t poolIndex = ph * pooledWidth_ + pw; outputData[poolIndex] = isEmpty ? 0 : -FLT_MAX; - argmaxData[poolIndex] = -1; + if (argmaxData) { + argmaxData[poolIndex] = -1; + } for (size_t h = hstart; h < hend; ++h) { for (size_t w = wstart; w < wend; ++w) { size_t index = h * width_ + w; if (batchData[index] > outputData[poolIndex]) { outputData[poolIndex] = batchData[index]; - argmaxData[poolIndex] = index; + if (argmaxData) { + argmaxData[poolIndex] = index; + } } } } @@ -143,7 +150,9 @@ void ROIPoolLayer::forward(PassType passType) { } batchData += channelOffset; outputData += poolChannelOffset; - argmaxData += poolChannelOffset; + if (argmaxData) { + argmaxData += poolChannelOffset; + } } bottomROIs += roiOffset; } diff --git a/paddle/math/float16.h b/paddle/math/float16.h index 76ad3a01239e409caeefc36a3d562ed5e388dc92..efebbce50405018c6b7ce2049f8d55c33680469f 100644 --- a/paddle/math/float16.h +++ b/paddle/math/float16.h @@ -79,7 +79,7 @@ public: #ifdef PADDLE_CUDA_FP16 HOSTDEVICE inline explicit float16(const half& h) { #if CUDA_VERSION >= 9000 - x = reinterpret_cast<__half_raw*>(&h)->x; + x = reinterpret_cast<__half_raw*>(const_cast(&h))->x; #else x = h.x; #endif // CUDA_VERSION >= 9000 @@ -145,7 +145,7 @@ public: #ifdef PADDLE_CUDA_FP16 HOSTDEVICE inline float16& operator=(const half& rhs) { #if CUDA_VERSION >= 9000 - x = reinterpret_cast<__half_raw*>(&rhs)->x; + x = reinterpret_cast<__half_raw*>(const_cast(&rhs))->x; #else x = rhs.x; #endif diff --git a/paddle/memory/detail/system_allocator.cc b/paddle/memory/detail/system_allocator.cc index 6a815a1b57db1d833781ca224f34e4559af9b9a5..509250debc2b2fd2e87078ab5f233ae2db6fd898 100644 --- a/paddle/memory/detail/system_allocator.cc +++ b/paddle/memory/detail/system_allocator.cc @@ -19,6 +19,7 @@ limitations under the License. */ #include // for malloc and free #include // for mlock and munlock +#include // for std::max #include "gflags/gflags.h" @@ -28,7 +29,7 @@ limitations under the License. */ // of memory available to the system for paging. So, by default, we // should set false to use_pinned_memory. DEFINE_bool(use_pinned_memory, true, "If set, allocate cpu pinned memory."); - +DECLARE_double(fraction_of_gpu_memory_to_use); namespace paddle { namespace memory { namespace detail { @@ -77,45 +78,20 @@ void* GPUAllocator::Alloc(size_t& index, size_t size) { // CUDA documentation doesn't explain if cudaMalloc returns nullptr // if size is 0. We just make sure it does. if (size <= 0) return nullptr; - - size_t available = 0; - size_t capacity = 0; - paddle::platform::GpuMemoryUsage(available, capacity); - - // Reserve memory for page tables, etc. - size_t reserving = 0.05 * capacity + paddle::platform::GpuMinChunkSize(); - size_t usable = available > reserving ? available - reserving : 0; - - // If remaining size no less than expected size, using general - // cudaMalloc to allocate GPU memory. - void* p = 0; - if (size <= usable) { - cudaError_t result = cudaMalloc(&p, size); - if (result == cudaSuccess) { - index = 0; - gpu_alloc_size_ += size; - return p; - } - } - - // If remaining size less than expected size or cudaMalloc failed, - // cudaMallocHost will be considered as a fallback allocator. 
- // - // NOTE: here, we use GpuMaxAllocSize() as the maximum memory size - // of host fallback allocation. Allocates too much would reduce - // the amount of memory available to the underlying system for paging. - usable = paddle::platform::GpuMaxAllocSize() - fallback_alloc_size_; - - if (size > usable) return nullptr; - - cudaError_t result = cudaMallocHost(&p, size); + void* p; + cudaError_t result = cudaMalloc(&p, size); if (result == cudaSuccess) { - index = 1; - fallback_alloc_size_ += size; + index = 0; + gpu_alloc_size_ += size; return p; + } else { + LOG(WARNING) + << "Cannot malloc " << size / 1024.0 / 1024.0 + << " MB GPU memory. Please shrink FLAGS_fraction_of_gpu_memory_to_use " + "environment variable to a lower value. Current value is " + << FLAGS_fraction_of_gpu_memory_to_use; + return nullptr; } - - return nullptr; } void GPUAllocator::Free(void* p, size_t size, size_t index) { diff --git a/paddle/operators/chunk_eval_op.cc b/paddle/operators/chunk_eval_op.cc index 94127ab33e51d5529b63b5e3696032ef8adcf03e..894f355deb9d764ef72d452f362e6b42f8831667 100644 --- a/paddle/operators/chunk_eval_op.cc +++ b/paddle/operators/chunk_eval_op.cc @@ -32,6 +32,13 @@ class ChunkEvalOp : public framework::OperatorWithKernel { "Output(Recall) of ChunkEvalOp should not be null."); PADDLE_ENFORCE(ctx->HasOutput("F1-Score"), "Output(F1-Score) of ChunkEvalOp should not be null."); + PADDLE_ENFORCE(ctx->HasOutput("NumInferChunks"), + "Output(NumInferChunks) of ChunkEvalOp should not be null."); + PADDLE_ENFORCE(ctx->HasOutput("NumLabelChunks"), + "Output(NumLabelChunks) of ChunkEvalOp should not be null."); + PADDLE_ENFORCE( + ctx->HasOutput("NumCorrectChunks"), + "Output(NumCorrectChunks) of ChunkEvalOp should not be null."); auto inference_dim = ctx->GetInputDim("Inference"); auto label_dim = ctx->GetInputDim("Label"); @@ -42,6 +49,9 @@ class ChunkEvalOp : public framework::OperatorWithKernel { ctx->SetOutputDim("Precision", {1}); ctx->SetOutputDim("Recall", {1}); ctx->SetOutputDim("F1-Score", {1}); + ctx->SetOutputDim("NumInferChunks", {1}); + ctx->SetOutputDim("NumLabelChunks", {1}); + ctx->SetOutputDim("NumCorrectChunks", {1}); } protected: @@ -70,6 +80,16 @@ class ChunkEvalOpMaker : public framework::OpProtoAndCheckerMaker { "sensitivity) of chunks on the given mini-batch."); AddOutput("F1-Score", "(float). The evaluated F1-Score on the given mini-batch."); + AddOutput("NumInferChunks", + "(int64_t). The number of chunks in Inference on the given " + "mini-batch."); + AddOutput( + "NumLabelChunks", + "(int64_t). The number of chunks in Label on the given mini-batch."); + AddOutput( + "NumCorrectChunks", + "(int64_t). The number of chunks both in Inference and Label on the " + "given mini-batch."); AddAttr("num_chunk_types", "(int). The number of chunk type. 
See below for details."); AddAttr( diff --git a/paddle/operators/chunk_eval_op.h b/paddle/operators/chunk_eval_op.h index 9cd758a8253914515437b480e17a94d5d6b21fd2..74ab435c860b22b2ee3f485743540976a7a31b96 100644 --- a/paddle/operators/chunk_eval_op.h +++ b/paddle/operators/chunk_eval_op.h @@ -111,9 +111,7 @@ class ChunkEvalKernel : public framework::OpKernel { std::vector label_segments; std::vector output_segments; std::set excluded_chunk_types; - int64_t num_output_segments = 0; - int64_t num_label_segments = 0; - int64_t num_correct = 0; + if (context.Attr("chunk_scheme") == "IOB") { num_tag_types = 2; tag_begin = 0; @@ -151,12 +149,24 @@ class ChunkEvalKernel : public framework::OpKernel { auto* precision = context.Output("Precision"); auto* recall = context.Output("Recall"); auto* f1 = context.Output("F1-Score"); + auto* num_infer_chunks = context.Output("NumInferChunks"); + auto* num_label_chunks = context.Output("NumLabelChunks"); + auto* num_correct_chunks = context.Output("NumCorrectChunks"); const int64_t* inference_data = inference->data(); const int64_t* label_data = label->data(); T* precision_data = precision->mutable_data(context.GetPlace()); T* racall_data = recall->mutable_data(context.GetPlace()); T* f1_data = f1->mutable_data(context.GetPlace()); + int64_t* num_infer_chunks_data = + num_infer_chunks->mutable_data(context.GetPlace()); + int64_t* num_label_chunks_data = + num_label_chunks->mutable_data(context.GetPlace()); + int64_t* num_correct_chunks_data = + num_correct_chunks->mutable_data(context.GetPlace()); + *num_infer_chunks_data = 0; + *num_label_chunks_data = 0; + *num_correct_chunks_data = 0; auto lod = label->lod(); PADDLE_ENFORCE_EQ(lod.size(), 1UL, "Only support one level sequence now."); @@ -166,17 +176,23 @@ class ChunkEvalKernel : public framework::OpKernel { for (int i = 0; i < num_sequences; ++i) { int seq_length = lod[0][i + 1] - lod[0][i]; EvalOneSeq(inference_data + lod[0][i], label_data + lod[0][i], seq_length, - output_segments, label_segments, num_output_segments, - num_label_segments, num_correct, num_chunk_types, - num_tag_types, other_chunk_type, tag_begin, tag_inside, - tag_end, tag_single, excluded_chunk_types); + output_segments, label_segments, *num_infer_chunks_data, + *num_label_chunks_data, *num_correct_chunks_data, + num_chunk_types, num_tag_types, other_chunk_type, tag_begin, + tag_inside, tag_end, tag_single, excluded_chunk_types); } - *precision_data = !num_output_segments ? 0 : static_cast(num_correct) / - num_output_segments; - *racall_data = !num_label_segments ? 0 : static_cast(num_correct) / - num_label_segments; - *f1_data = !num_correct ? 0 : 2 * (*precision_data) * (*racall_data) / - ((*precision_data) + (*racall_data)); + *precision_data = !(*num_infer_chunks_data) + ? 0 + : static_cast(*num_correct_chunks_data) / + (*num_infer_chunks_data); + *racall_data = !(*num_label_chunks_data) + ? 0 + : static_cast(*num_correct_chunks_data) / + (*num_label_chunks_data); + *f1_data = !(*num_correct_chunks_data) + ? 
0 + : 2 * (*precision_data) * (*racall_data) / + ((*precision_data) + (*racall_data)); } void EvalOneSeq(const int64_t* output, const int64_t* label, int length, diff --git a/paddle/operators/conditional_block_op.cc b/paddle/operators/conditional_block_op.cc index 03c58a7eab8b2071a3a0b75ac0c665e32ef39876..6f2ef9174e84a0c0ae096956c04039435e6583c6 100644 --- a/paddle/operators/conditional_block_op.cc +++ b/paddle/operators/conditional_block_op.cc @@ -65,7 +65,7 @@ class ConditionalBlockOp : public ConditionalOp { scopes->front() = &scope.NewScope(); auto &cur_scope = *scopes->front(); - auto *block = Attr("block"); + auto *block = Attr("sub_block"); framework::Executor exec(dev_ctx); exec.Run(*block->Program(), &cur_scope, block->ID(), false); } @@ -88,7 +88,7 @@ class ConditionalBlockOpProtoMaker : public framework::OpProtoAndCheckerMaker { "unify the conditional block, rnn and while op, the type of " "scope is std::vector"); AddAttr( - "block", "The step block of conditional block operator"); + "sub_block", "The step block of conditional block operator"); AddComment(R"DOC(Conditional block operator Run the sub-block if X is not empty. Params is the other inputs and Out is the @@ -117,7 +117,7 @@ class ConditionalBlockGradOp : public ConditionalOp { auto &scopes = scope_var->Get>(); framework::Scope &cur_scope = *scopes[0]; - auto *block = Attr("block"); + auto *block = Attr("sub_block"); framework::Executor exec(dev_ctx); exec.Run(*block->Program(), &cur_scope, block->ID(), false); @@ -181,7 +181,7 @@ class ConditionalBlockGradMaker : public framework::SingleGradOpDescMaker { grad_op->SetInput("Scope", Output("Scope")); grad_op->SetOutput(framework::GradVarName("X"), InputGrad("X")); grad_op->SetOutput(framework::GradVarName("Params"), InputGrad("Params")); - grad_op->SetBlockAttr("block", *this->grad_block_[0]); + grad_op->SetBlockAttr("sub_block", *this->grad_block_[0]); return std::unique_ptr(grad_op); } }; diff --git a/paddle/operators/conv_op.h b/paddle/operators/conv_op.h index 749258183ba058cf0ed8d91c4406813694314b85..d2de4e80f751d4938ac9cad60871b470fccf225c 100644 --- a/paddle/operators/conv_op.h +++ b/paddle/operators/conv_op.h @@ -261,8 +261,12 @@ class GemmConvGradKernel : public framework::OpKernel { if (input_grad) { input_grad->mutable_data(context.GetPlace()); - set_zero(dev_ctx, input_grad, static_cast(0)); + // if is_expand is false, the operation of set_zero is unnecessary, + // because math::matmul will reset input_grad. 
+ if (is_expand) { + set_zero(dev_ctx, input_grad, static_cast(0)); + } math::Col2VolFunctor col2vol; math::Col2ImFunctor col2im; diff --git a/paddle/operators/conv_transpose_op.h b/paddle/operators/conv_transpose_op.h index 80600b53614994ba0c740aed0d75c9944333fecc..1171b0435fd2b1abe541043e8283a8fc09dc13c7 100644 --- a/paddle/operators/conv_transpose_op.h +++ b/paddle/operators/conv_transpose_op.h @@ -225,7 +225,6 @@ class GemmConvTransposeGradKernel : public framework::OpKernel { if (input_grad) { input_grad->mutable_data(context.GetPlace()); - set_zero(dev_ctx, input_grad, static_cast(0)); } if (filter_grad) { // filter size (m, c, k_h, k_w) filter_grad->mutable_data(context.GetPlace()); diff --git a/paddle/operators/math/math_function.cc b/paddle/operators/math/math_function.cc index 2b35e4532a9c9f72f473020d472244234af24248..a05810d7781f5286e70b53005ef0b193c945c54c 100644 --- a/paddle/operators/math/math_function.cc +++ b/paddle/operators/math/math_function.cc @@ -277,6 +277,14 @@ void set_constant_with_place( TensorSetConstantCPU(tensor, value)); } +template <> +void set_constant_with_place( + const platform::DeviceContext& context, framework::Tensor* tensor, + float value) { + framework::VisitDataType(framework::ToDataType(tensor->type()), + TensorSetConstantCPU(tensor, value)); +} + struct TensorSetConstantWithPlace : public boost::static_visitor { TensorSetConstantWithPlace(const platform::DeviceContext& context, framework::Tensor* tensor, float value) diff --git a/paddle/operators/math/math_function.cu b/paddle/operators/math/math_function.cu index 1b560a7e2d29c1b63a25d4ec9bbd82d5960a279d..e33070c40fbfa7f2794426247ef77b8fcaee4ec6 100644 --- a/paddle/operators/math/math_function.cu +++ b/paddle/operators/math/math_function.cu @@ -273,6 +273,13 @@ void set_constant_with_place( TensorSetConstantGPU(context, tensor, value)); } +template <> +void set_constant_with_place( + const platform::DeviceContext& context, framework::Tensor* tensor, + float value) { + set_constant_with_place(context, tensor, value); +} + template struct RowwiseAdd; template struct RowwiseAdd; template struct ColwiseSum; diff --git a/paddle/operators/recurrent_op.cc b/paddle/operators/recurrent_op.cc index 29f91636438449f90ea3ffee8adc21595aabe202..232d926f7b975c3b8ebecad983d0f1cc54b9486f 100644 --- a/paddle/operators/recurrent_op.cc +++ b/paddle/operators/recurrent_op.cc @@ -25,7 +25,7 @@ constexpr char kOutputs[] = "outputs"; constexpr char kStepScopes[] = "step_scopes"; constexpr char kExStates[] = "ex_states"; constexpr char kStates[] = "states"; -constexpr char kStepBlock[] = "step_block"; +constexpr char kStepBlock[] = "sub_block"; constexpr char kReverse[] = "reverse"; constexpr char kIsTrain[] = "is_train"; #define GRAD_SUFFIX "@GRAD" diff --git a/paddle/operators/reduce_op.cc b/paddle/operators/reduce_op.cc index b754637bf29225615f129d7423d60518e053ca18..fedc2a5c37ff84ffdf8ebd2f19296db92e256e5b 100644 --- a/paddle/operators/reduce_op.cc +++ b/paddle/operators/reduce_op.cc @@ -37,18 +37,23 @@ class ReduceOp : public framework::OperatorWithKernel { PADDLE_ENFORCE_LT( dim, x_rank, "The dim should be in the range [-rank(input), rank(input))."); - bool keep_dim = ctx->Attrs().Get("keep_dim"); - auto dims_vector = vectorize(x_dims); - if (keep_dim || x_rank == 1) { - dims_vector[dim] = 1; + bool reduce_all = ctx->Attrs().Get("reduce_all"); + if (reduce_all) { + ctx->SetOutputDim("Out", {1}); } else { - dims_vector.erase(dims_vector.begin() + dim); - } - auto out_dims = framework::make_ddim(dims_vector); - 
ctx->SetOutputDim("Out", out_dims); - if (dim != 0) { - // Only pass LoD when not reducing on the first dim. - ctx->ShareLoD("X", /*->*/ "Out"); + bool keep_dim = ctx->Attrs().Get("keep_dim"); + auto dims_vector = vectorize(x_dims); + if (keep_dim || x_rank == 1) { + dims_vector[dim] = 1; + } else { + dims_vector.erase(dims_vector.begin() + dim); + } + auto out_dims = framework::make_ddim(dims_vector); + ctx->SetOutputDim("Out", out_dims); + if (dim != 0) { + // Only pass LoD when not reducing on the first dim. + ctx->ShareLoD("X", /*->*/ "Out"); + } } } }; @@ -95,11 +100,16 @@ class ReduceOpMaker : public framework::OpProtoAndCheckerMaker { "(bool, default false) " "If true, retain the reduced dimension with length 1.") .SetDefault(false); + AddAttr("reduce_all", + "(bool, default false) " + "If true, output a scalar reduced along all dimensions.") + .SetDefault(false); comment_ = R"DOC( {ReduceOp} Operator. This operator computes the {reduce} of input tensor along the given dimension. The result tensor has 1 fewer dimension than the input unless keep_dim is true. +If reduce_all is true, just reduce along all dimensions and output a scalar. )DOC"; AddComment(comment_); diff --git a/paddle/operators/reduce_op.h b/paddle/operators/reduce_op.h index 47ce910f2821467c701a7f5e22a8dbe5c8c95c92..7bd99cb1e6d532963ef648202f460f363baad9b5 100644 --- a/paddle/operators/reduce_op.h +++ b/paddle/operators/reduce_op.h @@ -26,10 +26,12 @@ using DDim = framework::DDim; template using EigenTensor = framework::EigenTensor; - template using EigenScalar = framework::EigenScalar; +template +using EigenVector = framework::EigenVector; struct SumFunctor { template @@ -95,26 +97,41 @@ template class ReduceKernel : public framework::OpKernel { public: void Compute(const framework::ExecutionContext& context) const override { - int rank = context.Input("X")->dims().size(); - switch (rank) { - case 1: - ReduceCompute<1>(context); - break; - case 2: - ReduceCompute<2>(context); - break; - case 3: - ReduceCompute<3>(context); - break; - case 4: - ReduceCompute<4>(context); - break; - case 5: - ReduceCompute<5>(context); - break; - case 6: - ReduceCompute<6>(context); - break; + bool reduce_all = context.Attr("reduce_all"); + if (reduce_all) { + // Flatten and reduce 1-D tensor + auto* input = context.Input("X"); + auto* output = context.Output("Out"); + output->mutable_data(context.GetPlace()); + auto x = EigenVector::Flatten(*input); + auto out = EigenScalar::From(*output); + auto& place = + *context.template device_context().eigen_device(); + auto reduce_dim = Eigen::array({{0}}); + Functor functor; + functor(place, x, out, reduce_dim); + } else { + int rank = context.Input("X")->dims().size(); + switch (rank) { + case 1: + ReduceCompute<1>(context); + break; + case 2: + ReduceCompute<2>(context); + break; + case 3: + ReduceCompute<3>(context); + break; + case 4: + ReduceCompute<4>(context); + break; + case 5: + ReduceCompute<5>(context); + break; + case 6: + ReduceCompute<6>(context); + break; + } } } @@ -157,26 +174,46 @@ template class ReduceGradKernel : public framework::OpKernel { public: void Compute(const framework::ExecutionContext& context) const override { - int rank = context.Input("X")->dims().size(); - switch (rank) { - case 1: - ReduceGradCompute<1>(context); - break; - case 2: - ReduceGradCompute<2>(context); - break; - case 3: - ReduceGradCompute<3>(context); - break; - case 4: - ReduceGradCompute<4>(context); - break; - case 5: - ReduceGradCompute<5>(context); - break; - case 6: - 
ReduceGradCompute<6>(context); - break; + bool reduce_all = context.Attr("reduce_all"); + if (reduce_all) { + auto* input0 = context.Input("X"); + auto* input1 = context.Input("Out"); + auto* input2 = context.Input(framework::GradVarName("Out")); + auto* output = context.Output(framework::GradVarName("X")); + output->mutable_data(context.GetPlace()); + auto x = EigenVector::Flatten(*input0); + auto x_reduce = EigenVector::From(*input1); + auto x_reduce_grad = EigenVector::From(*input2); + auto x_grad = EigenVector::Flatten(*output); + auto& place = + *context.template device_context().eigen_device(); + auto broadcast_dim = + Eigen::array({{static_cast(input0->numel())}}); + Functor functor; + functor(place, x, x_reduce, x_grad, x_reduce_grad, broadcast_dim, + broadcast_dim[0]); + } else { + int rank = context.Input("X")->dims().size(); + switch (rank) { + case 1: + ReduceGradCompute<1>(context); + break; + case 2: + ReduceGradCompute<2>(context); + break; + case 3: + ReduceGradCompute<3>(context); + break; + case 4: + ReduceGradCompute<4>(context); + break; + case 5: + ReduceGradCompute<5>(context); + break; + case 6: + ReduceGradCompute<6>(context); + break; + } } } diff --git a/paddle/operators/reshape_op.cc b/paddle/operators/reshape_op.cc index 7fd33bf662a1d0b7b6fa4e772bdadbf34b2f4fdd..d82d828747c0c822195b699359b8e62d1cf7e92d 100644 --- a/paddle/operators/reshape_op.cc +++ b/paddle/operators/reshape_op.cc @@ -34,21 +34,33 @@ class ReshapeOp : public framework::OperatorWithKernel { auto shape = ctx->Attrs().Get>("shape"); PADDLE_ENFORCE(shape.size() > 0, "Attr(shape) shouldn't be empty."); auto x_dims = ctx->GetInputDim("X"); - // TODO(qiao) change batch_size - for (size_t i = 1; i < shape.size(); ++i) { - PADDLE_ENFORCE(shape[i] > 0, - "Each dimension of Attr(shape) " - "must be positive except the first one."); - } - if (shape[0] < 0) { - shape[0] = x_dims[0]; + + std::vector neg_dims_idx; + // set some dimension to -1 if it is unknown + const int unknown_size = -1; + for (size_t i = 0; i < shape.size(); ++i) { + PADDLE_ENFORCE(shape[i] > 0 || shape[i] == unknown_size, + "Each dimension of Attr(shape) must be positive or %d.", + unknown_size); + if (shape[i] == unknown_size) { + neg_dims_idx.push_back(i); + PADDLE_ENFORCE(neg_dims_idx.size() <= 1, + "Only one dimension of Attr(shape) can be unknown."); + } } - // capacity check + int64_t capacity = std::accumulate(shape.begin(), shape.end(), 1, std::multiplies()); int64_t in_size = framework::product(x_dims); - PADDLE_ENFORCE_EQ(capacity, in_size, - "The size of Input(X) mismatches with Attr(shape)."); + if (neg_dims_idx.size() == 1) { + // dim infer + shape[neg_dims_idx[0]] = in_size / (-capacity); + // recalculate capacity + capacity = shape[neg_dims_idx[0]] * (-capacity); + } + // capacity check + PADDLE_ENFORCE(capacity == in_size, + "The size of Input(X) mismatches with Attr(shape)."); // resize output std::vector shape_int64(shape.size(), 0); std::transform(shape.begin(), shape.end(), shape_int64.begin(), @@ -88,6 +100,9 @@ the tensor X into a 2-D tensor: [[1, 2, 3, 4]] +One dimension in the target shape can be set to -1, representing that its +size is unknown. In this case, the real dimension will be inferred from +the original shape of Input(X) and other dimensions in the target shape.
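As a worked illustration of the inference above: given Input(X) of shape [2, 2] (4 elements in total) and Attr(shape) = [-1, 4], the product of the known target dimensions is 4, so the unknown dimension is inferred as 4 / 4 = 1 and the output shape becomes [1, 4].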
)DOC"); } }; diff --git a/paddle/operators/while_op.cc b/paddle/operators/while_op.cc index b8e44bcc5a99380fdf08cc2819b20045695eaf87..9a092a570ff1f3f529413cd44dff55147adbaadc 100644 --- a/paddle/operators/while_op.cc +++ b/paddle/operators/while_op.cc @@ -25,7 +25,7 @@ namespace operators { using StepScopeVar = std::vector; using LoDTensor = framework::LoDTensor; -constexpr char kStepBlock[] = "step_block"; +constexpr char kStepBlock[] = "sub_block"; constexpr char kCondition[] = "Condition"; constexpr char kStepScopes[] = "StepScopes"; constexpr char kParameters[] = "X"; diff --git a/paddle/platform/device_context.cc b/paddle/platform/device_context.cc index 2c7f96421621b9a34d1ec96c13d9c354a0d4012c..1c72b5055971e73c7aa560a61ca9d3c48dc56fbc 100644 --- a/paddle/platform/device_context.cc +++ b/paddle/platform/device_context.cc @@ -125,6 +125,22 @@ cudnnHandle_t CUDADeviceContext::cudnn_handle() const { return cudnn_handle_; } cudaStream_t CUDADeviceContext::stream() const { return stream_; } +CudnnDeviceContext::CudnnDeviceContext(CudnnPlace place) + : CUDADeviceContext(place), place_(place) { + PADDLE_ENFORCE(dynload::cudnnCreate(&cudnn_handle_)); + PADDLE_ENFORCE(dynload::cudnnSetStream(cudnn_handle_, stream())); +} + +CudnnDeviceContext::~CudnnDeviceContext() { + SetDeviceId(place_.device); + Wait(); + PADDLE_ENFORCE(dynload::cudnnDestroy(cudnn_handle_)); +} + +Place CudnnDeviceContext::GetPlace() const { return CudnnPlace(); } + +cudnnHandle_t CudnnDeviceContext::cudnn_handle() const { return cudnn_handle_; } + #endif } // namespace platform diff --git a/paddle/platform/device_context.h b/paddle/platform/device_context.h index 596d9d0bba420a47fc10cc9dd96a755daa35dbac..f67194993db1f4160bd6894b2c845a82f4da2354 100644 --- a/paddle/platform/device_context.h +++ b/paddle/platform/device_context.h @@ -86,6 +86,22 @@ class CUDADeviceContext : public DeviceContext { cublasHandle_t cublas_handle_; }; +class CudnnDeviceContext : public CUDADeviceContext { + public: + explicit CudnnDeviceContext(CudnnPlace place); + virtual ~CudnnDeviceContext(); + + /*! \brief Return place in the device context. */ + Place GetPlace() const final; + + /*! \brief Return cudnn handle in the device context. 
*/ + cudnnHandle_t cudnn_handle() const; + + private: + cudnnHandle_t cudnn_handle_; + CudnnPlace place_; +}; + #endif } // namespace platform diff --git a/paddle/platform/device_context_test.cc b/paddle/platform/device_context_test.cc index 4893cd92f6a74f7992c279ebd51232049f29e853..be3b2af5af09cb18f5156412ff60a7fc15a16487 100644 --- a/paddle/platform/device_context_test.cc +++ b/paddle/platform/device_context_test.cc @@ -46,3 +46,19 @@ TEST(Device, CUDADeviceContext) { delete device_context; } } + +TEST(Device, CudnnDeviceContext) { + using paddle::platform::CudnnDeviceContext; + using paddle::platform::CudnnPlace; + if (paddle::platform::dynload::HasCUDNN()) { + int count = paddle::platform::GetCUDADeviceCount(); + for (int i = 0; i < count; ++i) { + CudnnDeviceContext* device_context = + new CudnnDeviceContext(CudnnPlace(i)); + cudnnHandle_t cudnn_handle = device_context->cudnn_handle(); + ASSERT_NE(nullptr, cudnn_handle); + ASSERT_NE(nullptr, device_context->stream()); + delete device_context; + } + } +} diff --git a/paddle/platform/dynload/nccl.cc b/paddle/platform/dynload/nccl.cc index 8f92b8d94d56047b7d3fb43b15e3c06575c8d57b..91168f37effff3f8b864b6bb2ede070cb0a976fa 100644 --- a/paddle/platform/dynload/nccl.cc +++ b/paddle/platform/dynload/nccl.cc @@ -25,6 +25,11 @@ void *nccl_dso_handle; NCCL_RAND_ROUTINE_EACH(DEFINE_WRAP); +void LoadNCCLDSO() { + platform::call_once(nccl_dso_flag, + [] { GetNCCLDsoHandle(&nccl_dso_handle); }); +} + } // namespace dynload } // namespace platform } // namespace paddle diff --git a/paddle/platform/dynload/nccl.h b/paddle/platform/dynload/nccl.h index 981b2ab258a34ce92f02ee12b5957f88ba61d1c0..11007c1031c6d224c475e9ef4f11e7797decd78e 100644 --- a/paddle/platform/dynload/nccl.h +++ b/paddle/platform/dynload/nccl.h @@ -28,18 +28,18 @@ extern std::once_flag nccl_dso_flag; extern void* nccl_dso_handle; #ifdef PADDLE_USE_DSO -#define DECLARE_DYNAMIC_LOAD_NCCL_WRAP(__name) \ - struct DynLoad__##__name { \ - template \ - auto operator()(Args... args) -> decltype(__name(args...)) { \ - using nccl_func = decltype(__name(args...)) (*)(Args...); \ - platform::call_once(nccl_dso_flag, \ - paddle::platform::dynload::GetNCCLDsoHandle, \ - &nccl_dso_handle); \ - void* p_##__name = dlsym(nccl_dso_handle, #__name); \ - return reinterpret_cast(p_##__name)(args...); \ - } \ - }; \ +extern void LoadNCCLDSO(); + +#define DECLARE_DYNAMIC_LOAD_NCCL_WRAP(__name) \ + struct DynLoad__##__name { \ + template \ + auto operator()(Args... args) -> decltype(__name(args...)) { \ + using nccl_func = decltype(__name(args...)) (*)(Args...); \ + paddle::platform::dynload::LoadNCCLDSO(); \ + void* p_##__name = dlsym(nccl_dso_handle, #__name); \ + return reinterpret_cast(p_##__name)(args...); \ + } \ + }; \ extern DynLoad__##__name __name #else #define DECLARE_DYNAMIC_LOAD_NCCL_WRAP(__name) \ diff --git a/paddle/platform/gpu_info.cc b/paddle/platform/gpu_info.cc index 4fa2eaed31c6e9368459c2da6f8b0667b453d58c..541eca5f39c2e6a4b464aec79fd8a920ab4c7732 100644 --- a/paddle/platform/gpu_info.cc +++ b/paddle/platform/gpu_info.cc @@ -73,19 +73,20 @@ size_t GpuMaxChunkSize() { size_t available = 0; GpuMemoryUsage(available, total); - - // Reserving the rest memory for page tables, etc. - size_t reserving = 0.05 * total; - + VLOG(10) << "GPU Usage " << available / 1024 / 1024 << "M/" + << total / 1024 / 1024 << "M"; + size_t reserving = static_cast(0.05 * total); // If available less than minimum chunk size, no usable memory exists. 
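// Worked example of the sizing logic below (illustrative numbers, not taken
// from the codebase): with total = 12 GB, available = 11 GB and
// FLAGS_fraction_of_gpu_memory_to_use = 0.5, reserving = 0.05 * 12 GB = 0.6 GB,
// the usable upper bound becomes min(available - GpuMinChunkSize(), 11.4 GB),
// and allocating = 0.5 * (12 GB - 0.6 GB) = 5.7 GB, which satisfies the
// PADDLE_ENFORCE_LE(allocating, available) check.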
available = - std::max(std::max(available, GpuMinChunkSize()) - GpuMinChunkSize(), - reserving) - - reserving; + std::min(std::max(available, GpuMinChunkSize()) - GpuMinChunkSize(), + total - reserving); + + // Reserving the rest memory for page tables, etc. - size_t allocating = FLAGS_fraction_of_gpu_memory_to_use * total; + size_t allocating = static_cast(FLAGS_fraction_of_gpu_memory_to_use * + (total - reserving)); - PADDLE_ENFORCE_LT(allocating, available); + PADDLE_ENFORCE_LE(allocating, available); return allocating; } diff --git a/paddle/platform/nccl_test.cu b/paddle/platform/nccl_test.cu index c99dae68bef67c58d3efea42fef45e84bb3d9255..94ab360a1967d22e73bc6aefc0301487537c97f7 100644 --- a/paddle/platform/nccl_test.cu +++ b/paddle/platform/nccl_test.cu @@ -31,7 +31,7 @@ namespace platform { TEST(NCCL, init) { std::vector comms; comms.resize(dev_count); - PADDLE_ENFORCE(dynload::ncclCommInitAll(comms.data(), dev_count, nullptr)); + dynload::ncclCommInitAll(comms.data(), dev_count, nullptr); for (int i = 0; i < dev_count; ++i) { dynload::ncclCommDestroy(comms[i]); } @@ -62,7 +62,7 @@ TEST(NCCL, all_reduce) { std::vector comms; comms.resize(dev_count); VLOG(1) << "Initializing ncclComm"; - PADDLE_ENFORCE(dynload::ncclCommInitAll(comms.data(), dev_count, nullptr)); + dynload::ncclCommInitAll(comms.data(), dev_count, nullptr); VLOG(1) << "ncclComm initialized"; VLOG(1) << "Creating thread data"; std::vector>> data; diff --git a/paddle/platform/place.cc b/paddle/platform/place.cc index 856e54df89c1c18ade040957188a2fbda0901473..25fe8d21b49b07a6afe2938245906dc1bdd90398 100644 --- a/paddle/platform/place.cc +++ b/paddle/platform/place.cc @@ -23,6 +23,7 @@ class PlacePrinter : public boost::static_visitor<> { public: explicit PlacePrinter(std::ostream &os) : os_(os) {} void operator()(const CPUPlace &) { os_ << "CPUPlace"; } + void operator()(const MKLDNNPlace &) { os_ << "MKLDNNPlace"; } void operator()(const GPUPlace &p) { os_ << "GPUPlace(" << p.device << ")"; } private: @@ -38,12 +39,17 @@ const Place &get_place() { return the_default_place; } const GPUPlace default_gpu() { return GPUPlace(0); } const CPUPlace default_cpu() { return CPUPlace(); } +const MKLDNNPlace default_mkldnn() { return MKLDNNPlace(); } bool is_gpu_place(const Place &p) { return boost::apply_visitor(IsGPUPlace(), p); } bool is_cpu_place(const Place &p) { - return !boost::apply_visitor(IsGPUPlace(), p); + return !is_gpu_place(p) && !is_mkldnn_place(p); +} + +bool is_mkldnn_place(const Place &p) { + return boost::apply_visitor(IsMKLDNNPlace(), p); } bool places_are_same_class(const Place &p1, const Place &p2) { diff --git a/paddle/platform/place.h b/paddle/platform/place.h index 5370360a7de26e409a1545182a12d3df1f37658b..4526945792b2ea96cc4e9df11d8f35897cba7526 100644 --- a/paddle/platform/place.h +++ b/paddle/platform/place.h @@ -31,6 +31,14 @@ struct CPUPlace { inline bool operator!=(const CPUPlace &) const { return false; } }; +struct MKLDNNPlace { + MKLDNNPlace() {} + + // needed for variant equality comparison + inline bool operator==(const MKLDNNPlace &) const { return true; } + inline bool operator!=(const MKLDNNPlace &) const { return false; } +}; + struct GPUPlace { GPUPlace() : GPUPlace(0) {} explicit GPUPlace(int d) : device(d) {} @@ -43,16 +51,28 @@ struct GPUPlace { int device; }; +struct CudnnPlace : public GPUPlace { + CudnnPlace() : GPUPlace() {} + explicit CudnnPlace(int d) : GPUPlace(d) {} +}; + struct IsGPUPlace : public boost::static_visitor { bool operator()(const CPUPlace &) const { return false; } 
+ bool operator()(const MKLDNNPlace &) const { return false; } bool operator()(const GPUPlace &gpu) const { return true; } }; +struct IsMKLDNNPlace : public boost::static_visitor { + bool operator()(const MKLDNNPlace &) const { return true; } + bool operator()(const CPUPlace &) const { return false; } + bool operator()(const GPUPlace &) const { return false; } +}; + // Define the max number of Place in bit length. i.e., the max number of places // should be less equal than 2^(NUM_PLACE_TYPE_LIMIT_IN_BIT) #define NUM_PLACE_TYPE_LIMIT_IN_BIT 4 -typedef boost::variant Place; +typedef boost::variant Place; // static check number of place types is less equal than // 2^(NUM_PLACE_TYPE_LIMIT_IN_BIT) @@ -65,9 +85,11 @@ const Place &get_place(); const GPUPlace default_gpu(); const CPUPlace default_cpu(); +const MKLDNNPlace default_mkldnn(); bool is_gpu_place(const Place &); bool is_cpu_place(const Place &); +bool is_mkldnn_place(const Place &); bool places_are_same_class(const Place &, const Place &); std::ostream &operator<<(std::ostream &, const Place &); diff --git a/paddle/platform/place_test.cc b/paddle/platform/place_test.cc index 33e2e5a439ce6801c02daba4bcbd462a74d7a614..184af12c230f1ccd7826e507f16f4e91ca380a45 100644 --- a/paddle/platform/place_test.cc +++ b/paddle/platform/place_test.cc @@ -21,9 +21,15 @@ TEST(Place, Default) { EXPECT_TRUE(paddle::platform::is_gpu_place(paddle::platform::get_place())); EXPECT_TRUE(paddle::platform::is_gpu_place(paddle::platform::default_gpu())); EXPECT_TRUE(paddle::platform::is_cpu_place(paddle::platform::default_cpu())); + EXPECT_TRUE( + paddle::platform::is_mkldnn_place(paddle::platform::default_mkldnn())); paddle::platform::set_place(paddle::platform::CPUPlace()); EXPECT_TRUE(paddle::platform::is_cpu_place(paddle::platform::get_place())); + + paddle::platform::set_place(paddle::platform::MKLDNNPlace()); + EXPECT_FALSE(paddle::platform::is_cpu_place(paddle::platform::get_place())); + EXPECT_TRUE(paddle::platform::is_mkldnn_place(paddle::platform::get_place())); } TEST(Place, Print) { diff --git a/paddle/platform/variant.h b/paddle/platform/variant.h index 619897ca19eb2e6f4dbfd9160edf8c4bc58c89a9..284b4c42ac068b23737c12d3f147bbceb135dacc 100644 --- a/paddle/platform/variant.h +++ b/paddle/platform/variant.h @@ -14,6 +14,19 @@ #pragma once +#ifdef __CUDACC__ +#ifdef __CUDACC_VER_MAJOR__ +// CUDA 9 define `__CUDACC_VER__` as a warning message, manually define +// __CUDACC_VER__ instead. +#undef __CUDACC_VER__ + +#define __CUDACC_VER__ \ + (__CUDACC_VER_MAJOR__ * 10000 + __CUDACC_VER_MINOR__ * 100 + \ + __CUDACC_VER_BUILD__) +#endif + +#endif + #include #ifdef PADDLE_WITH_CUDA diff --git a/paddle/pybind/pybind.cc b/paddle/pybind/pybind.cc index c16d3e0cbe01f90a5aa9a5d7a523cd4e282e4771..1faf24bcb8828596ec37abde9e699f46526e41df 100644 --- a/paddle/pybind/pybind.cc +++ b/paddle/pybind/pybind.cc @@ -282,6 +282,23 @@ All parameter, weight, gradient are variables in Paddle. 
} return ret_values; }); + m.def("get_grad_op_descs", + [](const OpDescBind &op_desc, + const std::unordered_set &no_grad_set, + std::unordered_map &grad_to_var, + const std::vector &grad_sub_block) { + std::vector> grad_op_descs = + framework::OpInfoMap::Instance() + .Get(op_desc.Type()) + .GradOpMaker()(op_desc, no_grad_set, &grad_to_var, + grad_sub_block); + std::vector grad_op_desc_ptrs(grad_op_descs.size()); + std::transform( + grad_op_descs.begin(), grad_op_descs.end(), + grad_op_desc_ptrs.begin(), + [](std::unique_ptr &p) { return p.release(); }); + return grad_op_desc_ptrs; + }); m.def("prune", [](const ProgramDescBind &origin, const std::vector> &targets) { ProgramDescBind prog_with_targets(origin); diff --git a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/Dockerfile b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/Dockerfile index 1a2d19e823541750830fcaa25f65b2f8e1ea2b49..c2f631bdf4ed52a5dfa3fbcf1157d0abbdeadb9b 100644 --- a/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/Dockerfile +++ b/paddle/scripts/cluster_train_v2/openmpi/docker_cluster/Dockerfile @@ -1,7 +1,7 @@ # Build this image: docker build -t mpi . # -FROM paddledev/paddle:0.10.0rc3 +FROM paddlepaddle/paddle:0.10.0rc3 ENV DEBIAN_FRONTEND noninteractive diff --git a/paddle/scripts/tools/build_docs/build_docs.sh b/paddle/scripts/tools/build_docs/build_docs.sh index c6cbbc4eef94fb2e2fc3c1ce71734fbb23fc22d7..f9bc8bf63ae9afdfca1ff660bc83e62e71f03005 100755 --- a/paddle/scripts/tools/build_docs/build_docs.sh +++ b/paddle/scripts/tools/build_docs/build_docs.sh @@ -5,4 +5,4 @@ docker run --rm \ -e "WITH_AVX=ON" \ -e "WITH_DOC=ON" \ -e "WOBOQ=ON" \ - ${1:-"paddledev/paddle:dev"} + ${1:-"paddlepaddle/paddle:latest-dev"} diff --git a/python/.gitignore b/python/.gitignore index cc7d0ece4acaba2a3fa38a89110587fe8dffb992..1ba1d4c9b0301ed920f5303089e75dd3a8e4e3fa 100644 --- a/python/.gitignore +++ b/python/.gitignore @@ -2,6 +2,7 @@ build dist paddle.egg-info +paddlepaddle_gpu.egg-info .idea paddle/proto/*.py paddle/proto/*.pyc diff --git a/python/paddle/v2/fluid/evaluator.py b/python/paddle/v2/fluid/evaluator.py index 137c5736226b689340748d5098ca51659d5acff8..2d23ff0a1662026a41409c38dc76f066da896505 100644 --- a/python/paddle/v2/fluid/evaluator.py +++ b/python/paddle/v2/fluid/evaluator.py @@ -4,7 +4,7 @@ import layers from framework import Program, unique_name, Variable from layer_helper import LayerHelper -__all__ = ['Accuracy'] +__all__ = ['Accuracy', 'ChunkEvaluator'] def _clone_var_(block, var): @@ -132,3 +132,74 @@ class Accuracy(Evaluator): correct = layers.cast(correct, dtype='float32', **kwargs) out = layers.elementwise_div(x=correct, y=total, **kwargs) return np.array(executor.run(eval_program, fetch_list=[out])[0]) + + +class ChunkEvaluator(Evaluator): + """ + Accumulate counter numbers output by chunk_eval from mini-batches and + compute the precision recall and F1-score using the accumulated counter + numbers. 
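    The metrics returned by eval() are derived from the accumulated counters
    as follows (each value falls back to 0 when its denominator, or, for the
    F1-score, num_correct_chunks, is zero):

        precision = num_correct_chunks / num_infer_chunks
        recall    = num_correct_chunks / num_label_chunks
        F1-score  = 2 * precision * recall / (precision + recall)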
+ """ + + def __init__(self, + input, + label, + chunk_scheme, + num_chunk_types, + excluded_chunk_types=None, + **kwargs): + super(ChunkEvaluator, self).__init__("chunk_eval", **kwargs) + main_program = self.helper.main_program + if main_program.current_block().idx != 0: + raise ValueError("You can only invoke Evaluator in root block") + + self.num_infer_chunks = self.create_state( + dtype='int64', shape=[1], suffix='num_infer_chunks') + self.num_label_chunks = self.create_state( + dtype='int64', shape=[1], suffix='num_label_chunks') + self.num_correct_chunks = self.create_state( + dtype='int64', shape=[1], suffix='num_correct_chunks') + kwargs = {'main_program': main_program} + precision, recall, f1_score, num_infer_chunks, num_label_chunks, num_correct_chunks = layers.chunk_eval( + input=input, + label=label, + chunk_scheme=chunk_scheme, + num_chunk_types=num_chunk_types, + excluded_chunk_types=excluded_chunk_types, + **kwargs) + layers.sums( + input=[self.num_infer_chunks, num_infer_chunks], + out=self.num_infer_chunks, + **kwargs) + layers.sums( + input=[self.num_label_chunks, num_label_chunks], + out=self.num_label_chunks, + **kwargs) + layers.sums( + input=[self.num_correct_chunks, num_correct_chunks], + out=self.num_correct_chunks, + **kwargs) + + self.metrics.extend([precision, recall, f1_score]) + + def eval(self, executor, eval_program=None): + if eval_program is None: + eval_program = Program() + block = eval_program.current_block() + kwargs = {'main_program': eval_program} + num_infer_chunks, num_label_chunks, num_correct_chunks = executor.run( + eval_program, + fetch_list=[_clone_var_(block, state) for state in self.states]) + num_infer_chunks = num_infer_chunks[0] + num_label_chunks = num_label_chunks[0] + num_correct_chunks = num_correct_chunks[0] + precision = float( + num_correct_chunks) / num_infer_chunks if num_infer_chunks else 0 + recall = float( + num_correct_chunks) / num_label_chunks if num_label_chunks else 0 + f1_score = float(2 * precision * recall) / ( + precision + recall) if num_correct_chunks else 0 + return np.array( + [precision], dtype='float32'), np.array( + [recall], dtype='float32'), np.array( + [f1_score], dtype='float32') diff --git a/python/paddle/v2/fluid/layers/__init__.py b/python/paddle/v2/fluid/layers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..249f570e13b7a1b50397fb971d1c6f77e0359a5e --- /dev/null +++ b/python/paddle/v2/fluid/layers/__init__.py @@ -0,0 +1,17 @@ +import ops +from ops import * +import nn +from nn import * +import io +from io import * +import tensor +from tensor import * +import control_flow +from control_flow import * + +__all__ = [] +__all__ += nn.__all__ +__all__ += io.__all__ +__all__ += tensor.__all__ +__all__ += control_flow.__all__ +__all__ += ops.__all__ diff --git a/python/paddle/v2/fluid/layers.py b/python/paddle/v2/fluid/layers/control_flow.py similarity index 50% rename from python/paddle/v2/fluid/layers.py rename to python/paddle/v2/fluid/layers/control_flow.py index bdb9e1ba8c6b1798b5a69e5ef46648fda544378b..5af6c789773fe80ceed99c69a419f18cf2db8d37 100644 --- a/python/paddle/v2/fluid/layers.py +++ b/python/paddle/v2/fluid/layers/control_flow.py @@ -1,424 +1,18 @@ +from ..layer_helper import LayerHelper, unique_name +from ..framework import Program, Variable, Operator +from .. 
import core +from tensor import assign, fill_constant import contextlib -import proto.framework_pb2 as framework_pb2 -import core -from framework import OpProtoHolder, Variable, Program, Operator -from initializer import Constant, Normal, Xavier, Initializer -from paddle.v2.fluid.layer_helper import LayerHelper, unique_name -from registry import register_layer -from param_attr import ParamAttr - __all__ = [ - 'fc', 'data', 'cross_entropy', 'conv2d', 'pool2d', 'embedding', 'concat', - 'StaticRNN', 'cast', 'sequence_conv', 'sequence_pool', 'sums', 'cos_sim', - 'batch_norm', 'accuracy', 'split_lod_tensor', 'While', 'seq_expand' -] - -_REGISTER_LAYER_FROM_OPS = [ - 'mean', 'mul', 'dropout', 'reshape', 'sigmoid', 'scale', 'transpose', - 'sigmoid_cross_entropy_with_logits', 'elementwise_add', 'elementwise_div', - 'elementwise_sub', 'elementwise_mul', 'clip', 'abs' + 'split_lod_tensor', 'merge_lod_tensor', 'BlockGuard', 'StaticRNNGuard', + 'StaticRNNMemoryLink', 'WhileGuard', 'While', 'lod_rank_table', + 'max_sequence_len', 'topk', 'lod_tensor_to_array', 'array_to_lod_tensor', + 'increment', 'array_write', 'create_array', 'less_than', 'array_read', + 'shrink_memory', 'array_length', 'IfElse', 'DynamicRNN', 'ConditionalBlock', + 'StaticRNN' ] -for _OP in set(_REGISTER_LAYER_FROM_OPS): - globals()[_OP] = register_layer(_OP) - __all__.append(_OP) - - -def fc(input, - size, - num_flatten_dims=1, - param_attr=None, - bias_attr=None, - act=None, - name=None, - main_program=None, - startup_program=None): - """ - Fully Connected Layer. - - Args: - input: The input tensor to the function - size: The size of the layer - num_flatten_dims: Number of columns in input - param_attr: The parameters/weights to the FC Layer - param_initializer: Initializer used for the weight/parameter. If None, XavierInitializer() is used - bias_attr: The bias parameter for the FC layer - bias_initializer: Initializer used for the bias. If None, then ConstantInitializer() is used - act: Activation to be applied to the output of FC layer - name: Name/alias of the function - main_program: Name of the main program that calls this - startup_program: Name of the startup program - - This function can take in multiple inputs and performs the Fully Connected - function (linear transformation) on top of each of them. - So for input x, the output will be : Wx + b. Where W is the parameter, - b the bias and x is the input. - - The function also applies an activation (non-linearity) on top of the - output, if activation is passed in the input. - - All the input variables of this function are passed in as local variables - to the LayerHelper constructor. 
- - """ - helper = LayerHelper('fc', **locals()) - - dtype = helper.input_dtype() - - mul_results = [] - for input_var, param_attr in helper.iter_inputs_and_params(): - input_shape = input_var.shape - param_shape = [ - reduce(lambda a, b: a * b, input_shape[num_flatten_dims:], 1) - ] + [size] - w = helper.create_parameter( - attr=param_attr, shape=param_shape, dtype=dtype, is_bias=False) - tmp = helper.create_tmp_variable(dtype) - helper.append_op( - type="mul", - inputs={ - "X": input_var, - "Y": w, - }, - outputs={"Out": tmp}, - attrs={'x_num_col_dims': num_flatten_dims, - 'y_num_col_dims': 1}) - mul_results.append(tmp) - - # sum - if len(mul_results) == 1: - pre_bias = mul_results[0] - else: - pre_bias = helper.create_tmp_variable(dtype) - helper.append_op( - type="sum", inputs={"X": mul_results}, outputs={"Out": pre_bias}) - # add bias - pre_activation = helper.append_bias_op(pre_bias) - # add activation - return helper.append_activation(pre_activation) - - -def embedding(input, - size, - is_sparse=False, - param_attr=None, - dtype='float32', - main_program=None, - startup_program=None): - """ - Embedding Layer. - - Args: - param_initializer: - input: The input to the function - size: The size of the layer - is_sparse: A flag that decleares whether the input is sparse - param_attr: Parameters for this layer - dtype: The type of data : float32, float_16, int etc - main_program: Name of the main program that calls this - startup_program: Name of the startup program - - This function can take in the input (which is a vector of IDs) and - performs a lookup in the lookup_table using these IDs, to result into - the embedding of each ID in the input. - - All the input variables of this function are passed in as local variables - to the LayerHelper constructor. 
- - """ - - helper = LayerHelper('embedding', **locals()) - w = helper.create_parameter( - attr=helper.param_attr, shape=size, dtype=dtype, is_bias=False) - tmp = helper.create_tmp_variable(dtype) - helper.append_op( - type='lookup_table', - inputs={'Ids': input, - 'W': w}, - outputs={'Out': tmp}, - attrs={'is_sparse': is_sparse}) - return tmp - - -# TODO(qijun): expose H0 and C0 -def dynamic_lstm(input, - size, - param_attr=None, - bias_attr=None, - use_peepholes=True, - is_reverse=False, - gate_activation='sigmoid', - cell_activation='tanh', - candidate_activation='tanh', - dtype='float32', - main_program=None, - startup_program=None): - helper = LayerHelper('lstm', **locals()) - size = size / 4 - weight = helper.create_parameter( - attr=helper.param_attr, shape=[size, 4 * size], dtype=dtype) - bias_size = [1, 7 * size] - if not use_peepholes: - bias_size[1] = 4 * size - bias = helper.create_parameter( - attr=helper.bias_attr, shape=bias_size, dtype=dtype, is_bias=True) - - hidden = helper.create_tmp_variable(dtype) - cell = helper.create_tmp_variable(dtype) - batch_gate = helper.create_tmp_variable(dtype) - batch_cell_pre_act = helper.create_tmp_variable(dtype) - - helper.append_op( - type='lstm', - inputs={'Input': input, - 'Weight': weight, - 'Bias': bias}, - outputs={ - 'Hidden': hidden, - 'Cell': cell, - 'BatchGate': batch_gate, - 'BatchCellPreAct': batch_cell_pre_act - }, - attrs={ - 'use_peepholes': use_peepholes, - 'is_reverse': is_reverse, - 'gate_activation': gate_activation, - 'cell_activation': cell_activation, - 'candidate_activation': candidate_activation - }) - return hidden, cell - - -def gru_unit(input, - hidden, - size, - weight=None, - bias=None, - activation='tanh', - gate_activation='sigmoid', - main_program=None, - startup_program=None): - """ - GRUUnit Operator implements partial calculations of the GRU unit as following: - - $$ - update \ gate: u_t = actGate(xu_t + W_u * h_{t-1} + b_u) \\ - reset \ gate: r_t = actGate(xr_t + W_r * h_{t-1} + b_r) \\ - output \ candidate: {h}_t = actNode(xc_t + W_c * dot(r_t, h_{t-1}) + b_c) \\ - output: h_t = dot((1 - u_t), h_{t-1}) + dot(u_t, {h}_t) - $$ - - which is same as one time step of GRU Operator. - - @note To implement the complete GRU unit, fully-connected operator must be - used before to feed xu, xr and xc as the Input of GRUUnit operator. 
- - TODO(ChunweiYan) add more document here - """ - activation_dict = dict( - identity=0, - sigmoid=1, - tanh=2, - relu=3, ) - activation = activation_dict[activation] - gate_activation = activation_dict[gate_activation] - - helper = LayerHelper('gru_unit', **locals()) - dtype = helper.input_dtype() - size = size / 3 - - # create weight - if weight is None: - weight = helper.create_parameter( - attr=helper.param_attr, shape=[size, 3 * size], dtype=dtype) - - # create bias - if bias is None: - bias_size = [1, 3 * size] - bias = helper.create_parameter( - attr=helper.bias_attr, shape=bias_size, dtype=dtype, is_bias=True) - - gate = helper.create_tmp_variable(dtype) - reset_hidden_pre = helper.create_tmp_variable(dtype) - updated_hidden = helper.create_tmp_variable(dtype) - - helper.append_op( - type='gru_unit', - inputs={'Input': input, - 'HiddenPrev': hidden, - 'Weight': weight}, - outputs={ - 'Gate': gate, - 'ResetHiddenPrev': reset_hidden_pre, - 'Hidden': updated_hidden, - }, - attrs={ - 'activation': 0, - 'gate_activation': 1, - }) - - return updated_hidden, reset_hidden_pre, gate - - -def data(name, - shape, - append_batch_size=True, - dtype='float32', - lod_level=0, - type=core.VarDesc.VarType.LOD_TENSOR, - main_program=None, - startup_program=None, - stop_gradient=True): - """ - Data Layer. - - Args: - name: The name/alias of the function - shape: Tuple declaring the shape. - append_batch_size: Whether or not to append the data as a batch. - dtype: The type of data : float32, float_16, int etc - type: The output type. By default it is LOD_TENSOR. - lod_level(int): The LoD Level. 0 means the input data is not a sequence. - main_program: Name of the main program that calls this - startup_program: Name of the startup program - stop_gradient: A boolean that mentions whether gradient should flow. - - This function takes in input and based on whether data has - to be returned back as a minibatch, it creates the global variable using - the helper functions. The global variables can be accessed by all the - following operations and layers in the graph. - - All the input variables of this function are passed in as local variables - to the LayerHelper constructor. - - """ - helper = LayerHelper('data', **locals()) - shape = list(shape) - for i in xrange(len(shape)): - if shape[i] is None: - shape[i] = -1 - append_batch_size = False - elif shape[i] < 0: - append_batch_size = False - - if append_batch_size: - shape = [-1] + shape # append batch size as -1 - - return helper.create_global_variable( - name=name, - shape=shape, - dtype=dtype, - type=type, - stop_gradient=stop_gradient, - lod_level=lod_level) - - -def create_tensor(dtype, name=None, main_program=None, startup_program=None): - helper = LayerHelper("create_tensor", **locals()) - return helper.create_variable(name=helper.name, dtype=dtype) - - -def cast(x, dtype, main_program=None): - """ - This function takes in the input with input_dtype - and casts it to the output_dtype as the output. - """ - helper = LayerHelper('cast', **locals()) - out = helper.create_tmp_variable(dtype=dtype) - helper.append_op( - type='cast', - inputs={'X': [x]}, - outputs={'Out': [out]}, - attrs={'in_dtype': x.dtype, - 'out_dtype': out.dtype}) - return out - - -def concat(input, axis, main_program=None, startup_program=None): - """ - This function concats the input along the axis mentioned - and returns that as the output. 
- """ - helper = LayerHelper('concat', **locals()) - out = helper.create_tmp_variable(dtype=helper.input_dtype()) - helper.append_op( - type='concat', - inputs={'X': input}, - outputs={'Out': [out]}, - attrs={'axis': axis}) - return out - - -def sums(input, out=None, main_program=None, startup_program=None): - """ - This function takes in the input and performs the sum operation on it - and returns that as the output. - """ - helper = LayerHelper('sum', **locals()) - if out is None: - out = helper.create_tmp_variable(dtype=helper.input_dtype()) - helper.append_op(type='sum', inputs={'X': input}, outputs={'Out': out}) - return out - - -def linear_chain_crf(input, - label, - param_attr=None, - main_program=None, - startup_program=None): - helper = LayerHelper('linear_chain_crf', **locals()) - size = input.shape[1] - transition = helper.create_parameter( - attr=helper.param_attr, - shape=[size + 2, size], - dtype=helper.input_dtype()) - alpha = helper.create_tmp_variable(dtype=helper.input_dtype()) - emission_exps = helper.create_tmp_variable(dtype=helper.input_dtype()) - transition_exps = helper.create_tmp_variable(dtype=helper.input_dtype()) - log_likelihood = helper.create_tmp_variable(dtype=helper.input_dtype()) - helper.append_op( - type='linear_chain_crf', - inputs={"Emission": [input], - "Transition": transition, - "Label": label}, - outputs={ - "Alpha": [alpha], - "EmissionExps": [emission_exps], - "TransitionExps": transition_exps, - "LogLikelihood": log_likelihood - }) - - return log_likelihood - - -def crf_decoding(input, - param_attr, - label=None, - main_program=None, - startup_program=None): - helper = LayerHelper('crf_decoding', **locals()) - transition = helper.get_parameter(param_attr.name) - viterbi_path = helper.create_tmp_variable(dtype=helper.input_dtype()) - helper.append_op( - type='crf_decoding', - inputs={"Emission": [input], - "Transition": transition, - "Label": label}, - outputs={"ViterbiPath": [viterbi_path]}) - - return viterbi_path - - -def assign(input, output, main_program=None, startup_program=None): - helper = LayerHelper('assign', **locals()) - helper.append_op( - type='scale', - inputs={'X': [input]}, - outputs={'Out': [output]}, - attrs={'scale': 1.0}) - return output - def split_lod_tensor(input, mask, @@ -460,404 +54,6 @@ def merge_lod_tensor(in_true, return out -def cos_sim(X, Y, **kwargs): - """ - This function performs the cosine similarity between two tensors - X and Y and returns that as the output. - """ - helper = LayerHelper('cos_sim', **kwargs) - out = helper.create_tmp_variable(dtype=X.dtype) - xnorm = helper.create_tmp_variable(dtype=X.dtype) - ynorm = helper.create_tmp_variable(dtype=X.dtype) - helper.append_op( - type='cos_sim', - inputs={'X': [X], - 'Y': [Y]}, - outputs={'Out': [out], - 'XNorm': [xnorm], - 'YNorm': [ynorm]}) - return out - - -def cross_entropy(input, label, **kwargs): - """ - This function computes cross_entropy using the input and label. - """ - helper = LayerHelper('cross_entropy', **kwargs) - out = helper.create_tmp_variable(dtype=input.dtype) - helper.append_op( - type='cross_entropy', - inputs={'X': [input], - 'Label': [label]}, - outputs={'Y': [out]}, - attrs=kwargs) - return out - - -def square_error_cost(input, label, **kwargs): - """ - This functions returns the squared error cost using the input and label. - The output is appending the op to do the above. 
- """ - helper = LayerHelper('square_error_cost', **kwargs) - minus_out = helper.create_tmp_variable(dtype=input.dtype) - helper.append_op( - type='elementwise_sub', - inputs={'X': [input], - 'Y': [label]}, - outputs={'Out': [minus_out]}) - - square_out = helper.create_tmp_variable(dtype=input.dtype) - helper.append_op( - type='square', inputs={'X': [minus_out]}, outputs={'Y': [square_out]}) - return square_out - - -def accuracy(input, label, k=1, correct=None, total=None, **kwargs): - """ - This function computes the accuracy using the input and label. - The output is the top_k inputs and their indices. - """ - helper = LayerHelper("accuracy", **kwargs) - topk_out = helper.create_tmp_variable(dtype=input.dtype) - topk_indices = helper.create_tmp_variable(dtype="int64") - helper.append_op( - type="top_k", - inputs={"X": [input]}, - outputs={"Out": [topk_out], - "Indices": [topk_indices]}, - attrs={"k": k}) - acc_out = helper.create_tmp_variable(dtype="float32") - if correct is None: - correct = helper.create_tmp_variable(dtype="int64") - if total is None: - total = helper.create_tmp_variable(dtype="int64") - helper.append_op( - type="accuracy", - inputs={ - "Out": [topk_out], - "Indices": [topk_indices], - "Label": [label] - }, - outputs={ - "Accuracy": [acc_out], - "Correct": [correct], - "Total": [total], - }) - return acc_out - - -def chunk_eval(input, - label, - chunk_scheme, - num_chunk_types, - excluded_chunk_types=None, - **kwargs): - """ - This function computes the accuracy using the input and label. - The output is the top_k inputs and their indices. - """ - helper = LayerHelper("chunk_eval", **kwargs) - - # prepare output - precision = helper.create_tmp_variable(dtype="float32") - recall = helper.create_tmp_variable(dtype="float32") - f1_score = helper.create_tmp_variable(dtype="float32") - - helper.append_op( - type="chunk_eval", - inputs={"Inference": [input], - "Label": [label]}, - outputs={ - "Precision": [precision], - "Recall": [recall], - "F1-Score": [f1_score] - }, - attrs={ - "num_chunk_types": num_chunk_types, - 'chunk_scheme': chunk_scheme, - 'excluded_chunk_types': excluded_chunk_types or [] - }) - return precision, recall, f1_score - - -def sequence_conv(input, - num_filters, - filter_size=3, - filter_stride=1, - padding=None, - bias_attr=None, - param_attr=None, - act=None, - main_program=None, - startup_program=None): - """ - This function creates the op for sequence_conv, using the inputs and - other convolutional configurations for the filters and stride as given - in the input parameters to the function. - """ - - # FIXME(dzh) : want to unify the argument of python layer - # function. So we ignore some unecessary attributes. - # such as, padding_trainable, context_start. 
- - helper = LayerHelper('sequence_conv', **locals()) - dtype = helper.input_dtype() - filter_shape = [filter_size * input.shape[1], num_filters] - filter_param = helper.create_parameter( - attr=helper.param_attr, shape=filter_shape, dtype=dtype) - pre_bias = helper.create_tmp_variable(dtype) - - helper.append_op( - type='sequence_conv', - inputs={ - 'X': [input], - 'Filter': [filter_param], - }, - outputs={"Out": pre_bias}, - attrs={ - 'contextStride': filter_stride, - 'contextStart': -int(filter_size / 2), - 'contextLength': filter_size - }) - pre_act = helper.append_bias_op(pre_bias) - return helper.append_activation(pre_act) - - -def conv2d(input, - num_filters, - filter_size, - stride=None, - padding=None, - groups=None, - param_attr=None, - bias_attr=None, - act=None, - name=None, - main_program=None, - startup_program=None): - """ - This function creates the op for a 2-dimensional Convolution. - This is performed using the parameters of filters(size, dimensionality etc) - , stride and other configurations for a Convolution operation. - This funciton can also append an activation on top of the - conv-2d output, if mentioned in the input parameters. - """ - - if stride is None: - stride = [1, 1] - helper = LayerHelper('conv2d', **locals()) - dtype = helper.input_dtype() - - num_channels = input.shape[1] - if groups is None: - num_filter_channels = num_channels - else: - if num_channels % groups != 0: - raise ValueError("num_channels must be divisible by groups.") - num_filter_channels = num_channels / groups - - if isinstance(filter_size, int): - filter_size = [filter_size, filter_size] - if isinstance(stride, int): - stride = [stride, stride] - if isinstance(padding, int): - padding = [padding, padding] - - input_shape = input.shape - filter_shape = [num_filters, num_filter_channels] + filter_size - - def _get_default_param_initializer(): - std = (2.0 / (filter_size[0]**2 * num_channels))**0.5 - return Normal(0.0, std, 0) - - filter_param = helper.create_parameter( - attr=helper.param_attr, - shape=filter_shape, - dtype=dtype, - default_initializer=_get_default_param_initializer()) - - pre_bias = helper.create_tmp_variable(dtype) - - helper.append_op( - type='conv2d_cudnn', - inputs={ - 'Input': input, - 'Filter': filter_param, - }, - outputs={"Output": pre_bias}, - attrs={'strides': stride, - 'paddings': padding, - 'groups': groups}) - - pre_act = helper.append_bias_op(pre_bias, dim_start=1, dim_end=2) - - return helper.append_activation(pre_act) - - -def sequence_pool(input, pool_type, **kwargs): - """ - This function add the operator for sequence pooling. - This is applied on top of the input using pool_type mentioned - in the parameters. - """ - helper = LayerHelper('sequence_pool', input=input, **kwargs) - dtype = helper.input_dtype() - pool_out = helper.create_tmp_variable(dtype) - max_index = helper.create_tmp_variable(dtype) - - helper.append_op( - type="sequence_pool", - inputs={"X": input}, - outputs={"Out": pool_out, - "MaxIndex": max_index}, - attrs={"pooltype": pool_type.upper()}) - - return pool_out - - -def pool2d(input, - pool_size, - pool_type, - pool_stride=None, - pool_padding=None, - global_pooling=False, - main_program=None, - startup_program=None): - """ - This function adds the operator for pooling in 2 dimensions, using the - pooling configurations mentioned in input parameters. 
- """ - if pool_padding is None: - pool_padding = [0, 0] - if pool_stride is None: - pool_stride = [1, 1] - if pool_type not in ["max", "avg"]: - raise ValueError( - "Unknown pool_type: '%s'. It can only be 'max' or 'avg'.", - str(pool_type)) - if isinstance(pool_size, int): - pool_size = [pool_size, pool_size] - if isinstance(pool_stride, int): - pool_stride = [pool_stride, pool_stride] - if isinstance(pool_padding, int): - pool_padding = [pool_padding, pool_padding] - - helper = LayerHelper('pool2d', **locals()) - dtype = helper.input_dtype() - pool_out = helper.create_tmp_variable(dtype) - - helper.append_op( - type="pool2d", - inputs={"X": input}, - outputs={"Out": pool_out}, - attrs={ - "pooling_type": pool_type, - "ksize": pool_size, - "global_pooling": global_pooling, - "strides": pool_stride, - "paddings": pool_padding - }) - - return pool_out - - -def batch_norm(input, - act=None, - is_test=False, - momentum=0.9, - epsilon=1e-05, - param_attr=None, - bias_attr=None, - data_layout='NCHW', - main_program=None, - startup_program=None): - """ - This function helps create an operator to implement - the BatchNorm layer using the configurations from the input parameters. - """ - helper = LayerHelper('batch_norm', **locals()) - dtype = helper.input_dtype() - - input_shape = input.shape - if data_layout == 'NCHW': - channel_num = input_shape[1] - else: - if data_layout == 'NHWC': - channel_num = input_shape[-1] - else: - raise ValueError("unsupported data layout:" + data_layout) - - param_shape = [channel_num] - - # create parameter - scale = helper.create_parameter( - attr=helper.param_attr, - shape=param_shape, - dtype=dtype, - default_initializer=Constant(1.0)) - - bias = helper.create_parameter( - attr=helper.param_attr, shape=param_shape, dtype=dtype, is_bias=True) - - mean = helper.create_global_variable( - dtype=input.dtype, shape=param_shape, persistable=True) - helper.set_variable_initializer(var=mean, initializer=Constant(0.0)) - - variance = helper.create_global_variable( - dtype=input.dtype, shape=param_shape, persistable=True) - helper.set_variable_initializer(var=variance, initializer=Constant(1.0)) - - # create output - # mean and mean_out share the same memory - mean_out = mean - # variance and variance out share the same memory - variance_out = variance - saved_mean = helper.create_tmp_variable(dtype) - saved_variance = helper.create_tmp_variable(dtype) - - batch_norm_out = helper.create_tmp_variable(dtype) - - helper.append_op( - type="batch_norm", - inputs={ - "X": input, - "Scale": scale, - "Bias": bias, - "Mean": mean, - "Variance": variance - }, - outputs={ - "Y": batch_norm_out, - "MeanOut": mean_out, - "VarianceOut": variance_out, - "SavedMean": saved_mean, - "SavedVariance": saved_variance - }, - attrs={"momentum": momentum, - "epsilon": epsilon, - "is_test": is_test}) - - return helper.append_activation(batch_norm_out) - - -def beam_search_decode(ids, scores, main_program=None, startup_program=None): - helper = LayerHelper('beam_search_decode', **locals()) - sentence_ids = helper.create_tmp_variable(dtype=ids.dtype) - sentence_scores = helper.create_tmp_variable(dtype=ids.dtype) - - helper.append_op( - type="beam_search_decode", - inputs={"Ids": ids, - "Scores": scores}, - outputs={ - "SentenceIds": sentence_ids, - "SentenceScores": sentence_scores - }) - - return sentence_ids, sentence_scores - - class BlockGuard(object): """ BlockGuard class. 
@@ -1130,7 +326,7 @@ class StaticRNN(object): attrs={ 'ex_states': pre_memories, 'states': memories, - 'step_block': rnn_block + 'sub_block': rnn_block }) @@ -1207,51 +403,7 @@ class While(object): }, outputs={'Out': out_vars, 'StepScopes': [step_scope]}, - attrs={'step_block': while_block}) - - -def lstm(x, - c_pre_init, - hidden_dim, - forget_bias=None, - main_program=None, - startup_program=None): - """ - This function helps create an operator for the LSTM (Long Short Term - Memory) cell that can be used inside an RNN. - """ - helper = LayerHelper('lstm_unit', **locals()) - rnn = StaticRNN() - with rnn.step(): - c_pre = rnn.memory(init=c_pre_init) - x_t = rnn.step_input(x) - - before_fc = concat( - input=[x_t, c_pre], - axis=1, - main_program=main_program, - startup_program=startup_program) - after_fc = fc(input=before_fc, - size=hidden_dim * 4, - main_program=main_program, - startup_program=startup_program) - - dtype = x.dtype - c = helper.create_tmp_variable(dtype) - h = helper.create_tmp_variable(dtype) - - helper.append_op( - type='lstm_unit', - inputs={"X": after_fc, - "C_prev": c_pre}, - outputs={"C": c, - "H": h}, - attrs={"forget_bias": forget_bias}) - - rnn.update_memory(c_pre, c) - rnn.output(h) - - return rnn() + attrs={'sub_block': while_block}) def lod_rank_table(x, level=0, main_program=None): @@ -1331,72 +483,6 @@ def array_to_lod_tensor(x, table, main_program=None, startup_program=None): return tmp -def fill_constant(shape, - dtype, - value, - out=None, - main_program=None, - startup_program=None): - """ - This function creates a tensor , with shape as mentioned in the input and - specified dtype and fills this up with a constant value that - comes in the input. It also sets the stop_gradient to be True. - """ - helper = LayerHelper("fill_constant", **locals()) - if out is None: - out = helper.create_tmp_variable(dtype=dtype) - helper.append_op( - type='fill_constant', - inputs={}, - outputs={'Out': [out]}, - attrs={'shape': shape, - 'dtype': out.dtype, - 'value': float(value)}) - out.stop_gradient = True - return out - - -def fill_constant_batch_size_like(input, - shape, - dtype, - value, - input_dim_idx=0, - output_dim_idx=0, - main_program=None, - startup_program=None): - helper = LayerHelper("fill_constant_batch_size_like", **locals()) - out = helper.create_tmp_variable(dtype=dtype) - helper.append_op( - type='fill_constant_batch_size_like', - inputs={'Input': input}, - outputs={'Out': [out]}, - attrs={ - 'shape': shape, - 'dtype': out.dtype, - 'value': float(value), - 'input_dim_idx': input_dim_idx, - 'output_dim_idx': output_dim_idx - }) - out.stop_gradient = True - return out - - -def ones(shape, dtype, main_program=None): - """ - This function performs the same function as fill_constant() declared above - with the constant value being 1.0. - """ - return fill_constant(value=1.0, **locals()) - - -def zeros(shape, dtype, main_program=None): - """ - This function performs the same function as fill_constant() declared above - with the constant value being 0.0. - """ - return fill_constant(value=0.0, **locals()) - - def increment(x, value=1.0, in_place=True, @@ -1508,95 +594,6 @@ def array_length(array, main_program=None): return tmp -def conv2d_transpose(input, - num_filters, - output_size=None, - filter_size=None, - padding=None, - stride=None, - param_attr=None, - main_program=None, - startup_program=None): - """ - The transpose of conv2d layer. - - This layer is also known as deconvolution layer. 
- - Args: - input(Variable): The input image with [N, C, H, W] format. - num_filters(int): The number of filter. It is as same as the output - image channel. - output_size(int|tuple|None): The output image size. If output size is a - tuple, it must contain two integers, (image_H, image_W). This - parameter only works when filter_size is None. - filter_size(int|tuple|None): The filter size. If filter_size is a tuple, - it must contain two integers, (filter_size_H, filter_size_W). - Otherwise, the filter will be a square. None if use output size to - calculate filter_size - padding(int|tuple): The padding size. If padding is a tuple, it must - contain two integers, (padding_H, padding_W). Otherwise, the - padding_H = padding_W = padding. - stride(int|tuple): The stride size. If stride is a tuple, it must - contain two integers, (stride_H, stride_W). Otherwise, the - stride_H = stride_W = stride. - param_attr: Parameter Attribute. - main_program(Program): the main program - startup_program(Program): the startup program - - Returns: - Variable: Output image. - """ - helper = LayerHelper("conv2d_transpose", **locals()) - if not isinstance(input, Variable): - raise TypeError("Input of conv2d_transpose must be Variable") - input_channel = input.shape[1] - - op_attr = dict() - - if isinstance(padding, int): - op_attr['paddings'] = [padding, padding] - elif padding is not None: - op_attr['paddings'] = padding - - if isinstance(stride, int): - op_attr['strides'] = stride - elif stride is not None: - op_attr['strides'] = stride - - if filter_size is None: - if output_size is None: - raise ValueError("output_size must be set when filter_size is None") - if isinstance(output_size, int): - output_size = [output_size, output_size] - - padding = op_attr.get('paddings', [0, 0]) - stride = op_attr.get('strides', [1, 1]) - - h_in = input.shape[2] - w_in = input.shape[3] - filter_size_h = output_size[0] - \ - (h_in - 1) * stride[0] + 2 * padding[0] - filter_size_w = output_size[1] - \ - (w_in - 1) * stride[1] + 2 * padding[1] - filter_size = [filter_size_h, filter_size_w] - elif isinstance(filter_size, int): - filter_size = [filter_size, filter_size] - - filter_shape = [input_channel, num_filters] + filter_size - img_filter = helper.create_parameter( - dtype=input.dtype, shape=filter_shape, attr=helper.param_attr) - - out = helper.create_tmp_variable(dtype=input.dtype) - helper.append_op( - type='conv2d_transpose', - inputs={'Input': [input], - 'Filter': [img_filter]}, - outputs={'Output': out}, - attrs=op_attr) - - return out - - class ConditionalBlockGuard(BlockGuard): def __init__(self, block): if not isinstance(block, ConditionalBlock): @@ -1671,7 +668,7 @@ class ConditionalBlock(object): }, outputs={'Out': out_list, 'Scope': [step_scope]}, - attrs={'block': inside_block}) + attrs={'sub_block': inside_block}) class IfElseBlockGuard(object): @@ -2023,70 +1020,3 @@ class DynamicRNN(object): if self.status != DynamicRNN.IN_RNN: raise ValueError("{0} can only be invoked inside rnn block.".format( method)) - - -def seq_expand(x, y, main_program=None, startup_program=None): - """Sequence Expand Layer. This layer will expand the input variable **x** - according to LoD information of **y**. And the following examples will - explain how seq_expand works: - - .. 
code-block:: text - - * Case 1 - x is a LoDTensor: - x.lod = [[0, 2, 3], - [0, 1, 3, 4]] - x.data = [a, b, c, d] - x.dims = [4, 1] - - y is a LoDTensor: - y.lod = [[0, 2, 4], - [0, 3, 6, 7, 8]] - - with condition len(y.lod[-1]) - 1 == x.dims[0] - - then output is a 2-level LoDTensor: - out.lod = [[0, 2, 4], - [0, 3, 6, 7, 8]] - out.data = [a, a, a, b, b, b, c, d] - out.dims = [8, 1] - - * Case 2 - x is a Tensor: - x.data = [a, b, c] - x.dims = [3, 1] - - y is a LoDTensor: - Y.lod = [[0, 2, 3, 6]] - - with condition len(y.lod[-1]) - 1 == x.dims[0] - - then output is a 1-level LoDTensor: - out.lod = [[0, 2, 3, 6]] - out.data = [a, a, b, c, c, c] - out.dims = [6, 1] - - Args: - x (Variable): The input variable which is a Tensor or LoDTensor. - y (Variable): The input variable which is a LoDTensor. - main_program (Program): The main program. - startup_program (Program): The startup program. - - Returns: - Variable: The expanded variable which is a LoDTensor. - - Examples: - .. code-block:: python - - x = fluid.layers.data(name='x', shape=[10], dtype='float32') - y = fluid.layers.data(name='y', shape=[10, 20], - dtype='float32', lod_level=1) - out = layers.seq_expand(x=x, y=y) - """ - helper = LayerHelper('seq_expand', input=x, **locals()) - dtype = helper.input_dtype() - tmp = helper.create_tmp_variable(dtype) - helper.append_op( - type='seq_expand', inputs={'X': x, - 'Y': y}, outputs={'Out': tmp}) - return tmp diff --git a/python/paddle/v2/fluid/layers/io.py b/python/paddle/v2/fluid/layers/io.py new file mode 100644 index 0000000000000000000000000000000000000000..f03d8e3c3e8797619adf837b28ed66ece7db295e --- /dev/null +++ b/python/paddle/v2/fluid/layers/io.py @@ -0,0 +1,57 @@ +from .. import core +from ..layer_helper import LayerHelper + +__all__ = ['data'] + + +def data(name, + shape, + append_batch_size=True, + dtype='float32', + lod_level=0, + type=core.VarDesc.VarType.LOD_TENSOR, + main_program=None, + startup_program=None, + stop_gradient=True): + """ + Data Layer. + + Args: + name: The name/alias of the function + shape: Tuple declaring the shape. + append_batch_size: Whether or not to append the data as a batch. + dtype: The type of data : float32, float_16, int etc + type: The output type. By default it is LOD_TENSOR. + lod_level(int): The LoD Level. 0 means the input data is not a sequence. + main_program: Name of the main program that calls this + startup_program: Name of the startup program + stop_gradient: A boolean that mentions whether gradient should flow. + + This function takes in input and based on whether data has + to be returned back as a minibatch, it creates the global variable using + the helper functions. The global variables can be accessed by all the + following operations and layers in the graph. + + All the input variables of this function are passed in as local variables + to the LayerHelper constructor. 
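    Example (a minimal usage sketch; the variable name and shape below are
    illustrative assumptions):

    .. code-block:: python

        image = fluid.layers.data(name='image', shape=[3, 224, 224],
                                  dtype='float32')
        # With append_batch_size=True (the default), the batch dimension is
        # prepended as -1, so the created variable has shape [-1, 3, 224, 224].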
+
+
+    """
+    helper = LayerHelper('data', **locals())
+    shape = list(shape)
+    for i in xrange(len(shape)):
+        if shape[i] is None:
+            shape[i] = -1
+            append_batch_size = False
+        elif shape[i] < 0:
+            append_batch_size = False
+
+    if append_batch_size:
+        shape = [-1] + shape  # append batch size as -1
+
+    return helper.create_global_variable(
+        name=name,
+        shape=shape,
+        dtype=dtype,
+        type=type,
+        stop_gradient=stop_gradient,
+        lod_level=lod_level)
diff --git a/python/paddle/v2/fluid/layers/nn.py b/python/paddle/v2/fluid/layers/nn.py
new file mode 100644
index 0000000000000000000000000000000000000000..cdc77d06047a18a774dbafd1fa76085fdc0512df
--- /dev/null
+++ b/python/paddle/v2/fluid/layers/nn.py
@@ -0,0 +1,858 @@
+"""
+All layers related to the neural network.
+"""
+
+from ..layer_helper import LayerHelper
+from ..initializer import Normal, Constant
+from ..framework import Variable
+
+__all__ = [
+    'fc', 'embedding', 'dynamic_lstm', 'gru_unit', 'linear_chain_crf',
+    'crf_decoding', 'cos_sim', 'cross_entropy', 'square_error_cost', 'accuracy',
+    'chunk_eval', 'sequence_conv', 'conv2d', 'sequence_pool', 'pool2d',
+    'batch_norm', 'beam_search_decode', 'conv2d_transpose'
+]
+
+
+def fc(input,
+       size,
+       num_flatten_dims=1,
+       param_attr=None,
+       bias_attr=None,
+       act=None,
+       name=None,
+       main_program=None,
+       startup_program=None):
+    """
+    Fully Connected Layer.
+
+    Args:
+       input: The input tensor(s) to the function
+       size: The output size of the layer (number of output units)
+       num_flatten_dims: The input is flattened to a 2-D matrix before the
+           multiplication; the first num_flatten_dims dimensions form the rows
+           and the remaining dimensions are flattened into the columns
+       param_attr: The parameter attribute (weights) of the FC layer. If its
+           initializer is None, XavierInitializer() is used
+       bias_attr: The bias parameter attribute of the FC layer. If its
+           initializer is None, ConstantInitializer() is used
+       act: Activation to be applied to the output of the FC layer
+       name: Name/alias of the function
+       main_program: Name of the main program that calls this
+       startup_program: Name of the startup program
+
+    This function can take in multiple inputs and performs a fully connected
+    (linear) transformation on each of them, so for an input x the output is
+    Wx + b, where W is the learnable weight and b the bias.
+
+    The function also applies an activation (non-linearity) on top of the
+    output if one is passed in via act.
+
+    All the input variables of this function are passed in as local variables
+    to the LayerHelper constructor.
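+
+    Examples:
+        A minimal usage sketch (the data layer, size and act values below are
+        illustrative):
+
+        .. code-block:: python
+
+            data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
+            fc = fluid.layers.fc(input=data, size=1000, act='tanh')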
+
+
+    """
+    helper = LayerHelper('fc', **locals())
+
+    dtype = helper.input_dtype()
+
+    mul_results = []
+    for input_var, param_attr in helper.iter_inputs_and_params():
+        input_shape = input_var.shape
+        param_shape = [
+            reduce(lambda a, b: a * b, input_shape[num_flatten_dims:], 1)
+        ] + [size]
+        w = helper.create_parameter(
+            attr=param_attr, shape=param_shape, dtype=dtype, is_bias=False)
+        tmp = helper.create_tmp_variable(dtype)
+        helper.append_op(
+            type="mul",
+            inputs={
+                "X": input_var,
+                "Y": w,
+            },
+            outputs={"Out": tmp},
+            attrs={'x_num_col_dims': num_flatten_dims,
+                   'y_num_col_dims': 1})
+        mul_results.append(tmp)
+
+    # sum
+    if len(mul_results) == 1:
+        pre_bias = mul_results[0]
+    else:
+        pre_bias = helper.create_tmp_variable(dtype)
+        helper.append_op(
+            type="sum", inputs={"X": mul_results}, outputs={"Out": pre_bias})
+    # add bias
+    pre_activation = helper.append_bias_op(pre_bias)
+    # add activation
+    return helper.append_activation(pre_activation)
+
+
+def embedding(input,
+              size,
+              is_sparse=False,
+              param_attr=None,
+              dtype='float32',
+              main_program=None,
+              startup_program=None):
+    """
+    Embedding Layer.
+
+    Args:
+       input: The input to the function
+       size: The size of the layer (the shape of the lookup table parameter)
+       is_sparse: A flag that declares whether the input is sparse
+       param_attr: Parameters for this layer
+       dtype: The type of data: float32, float16, int etc
+       main_program: Name of the main program that calls this
+       startup_program: Name of the startup program
+
+    This function takes in the input (which is a vector of IDs) and performs
+    a lookup in the lookup_table using these IDs to produce the embedding of
+    each ID in the input.
+
+    All the input variables of this function are passed in as local variables
+    to the LayerHelper constructor.
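+
+    Examples:
+        A minimal sketch (the dictionary size of 10000 and embedding width of
+        64 are illustrative values):
+
+        .. code-block:: python
+
+            ids = fluid.layers.data(name='ids', shape=[1], dtype='int64')
+            emb = fluid.layers.embedding(input=ids, size=[10000, 64])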
+ + """ + + helper = LayerHelper('embedding', **locals()) + w = helper.create_parameter( + attr=helper.param_attr, shape=size, dtype=dtype, is_bias=False) + tmp = helper.create_tmp_variable(dtype) + helper.append_op( + type='lookup_table', + inputs={'Ids': input, + 'W': w}, + outputs={'Out': tmp}, + attrs={'is_sparse': is_sparse}) + return tmp + + +# TODO(qijun): expose H0 and C0 +def dynamic_lstm(input, + size, + param_attr=None, + bias_attr=None, + use_peepholes=True, + is_reverse=False, + gate_activation='sigmoid', + cell_activation='tanh', + candidate_activation='tanh', + dtype='float32', + main_program=None, + startup_program=None): + helper = LayerHelper('lstm', **locals()) + size = size / 4 + weight = helper.create_parameter( + attr=helper.param_attr, shape=[size, 4 * size], dtype=dtype) + bias_size = [1, 7 * size] + if not use_peepholes: + bias_size[1] = 4 * size + bias = helper.create_parameter( + attr=helper.bias_attr, shape=bias_size, dtype=dtype, is_bias=True) + + hidden = helper.create_tmp_variable(dtype) + cell = helper.create_tmp_variable(dtype) + batch_gate = helper.create_tmp_variable(dtype) + batch_cell_pre_act = helper.create_tmp_variable(dtype) + + helper.append_op( + type='lstm', + inputs={'Input': input, + 'Weight': weight, + 'Bias': bias}, + outputs={ + 'Hidden': hidden, + 'Cell': cell, + 'BatchGate': batch_gate, + 'BatchCellPreAct': batch_cell_pre_act + }, + attrs={ + 'use_peepholes': use_peepholes, + 'is_reverse': is_reverse, + 'gate_activation': gate_activation, + 'cell_activation': cell_activation, + 'candidate_activation': candidate_activation + }) + return hidden, cell + + +def gru_unit(input, + hidden, + size, + weight=None, + bias=None, + activation='tanh', + gate_activation='sigmoid', + main_program=None, + startup_program=None): + """ + GRUUnit Operator implements partial calculations of the GRU unit as following: + + $$ + update \ gate: u_t = actGate(xu_t + W_u * h_{t-1} + b_u) \\ + reset \ gate: r_t = actGate(xr_t + W_r * h_{t-1} + b_r) \\ + output \ candidate: {h}_t = actNode(xc_t + W_c * dot(r_t, h_{t-1}) + b_c) \\ + output: h_t = dot((1 - u_t), h_{t-1}) + dot(u_t, {h}_t) + $$ + + which is same as one time step of GRU Operator. + + @note To implement the complete GRU unit, fully-connected operator must be + used before to feed xu, xr and xc as the Input of GRUUnit operator. 
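+
+    A rough usage sketch of that pattern (x_t, prev_hidden and hidden_dim are
+    illustrative names, not defined in this change):
+
+    .. code-block:: python
+
+        x_proj = fluid.layers.fc(input=x_t, size=hidden_dim * 3)
+        new_hidden, _, _ = fluid.layers.gru_unit(
+            input=x_proj, hidden=prev_hidden, size=hidden_dim * 3)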
+ + TODO(ChunweiYan) add more document here + """ + activation_dict = dict( + identity=0, + sigmoid=1, + tanh=2, + relu=3, ) + activation = activation_dict[activation] + gate_activation = activation_dict[gate_activation] + + helper = LayerHelper('gru_unit', **locals()) + dtype = helper.input_dtype() + size = size / 3 + + # create weight + if weight is None: + weight = helper.create_parameter( + attr=helper.param_attr, shape=[size, 3 * size], dtype=dtype) + + # create bias + if bias is None: + bias_size = [1, 3 * size] + bias = helper.create_parameter( + attr=helper.bias_attr, shape=bias_size, dtype=dtype, is_bias=True) + + gate = helper.create_tmp_variable(dtype) + reset_hidden_pre = helper.create_tmp_variable(dtype) + updated_hidden = helper.create_tmp_variable(dtype) + + helper.append_op( + type='gru_unit', + inputs={'Input': input, + 'HiddenPrev': hidden, + 'Weight': weight}, + outputs={ + 'Gate': gate, + 'ResetHiddenPrev': reset_hidden_pre, + 'Hidden': updated_hidden, + }, + attrs={ + 'activation': 0, + 'gate_activation': 1, + }) + + return updated_hidden, reset_hidden_pre, gate + + +def linear_chain_crf(input, + label, + param_attr=None, + main_program=None, + startup_program=None): + helper = LayerHelper('linear_chain_crf', **locals()) + size = input.shape[1] + transition = helper.create_parameter( + attr=helper.param_attr, + shape=[size + 2, size], + dtype=helper.input_dtype()) + alpha = helper.create_tmp_variable(dtype=helper.input_dtype()) + emission_exps = helper.create_tmp_variable(dtype=helper.input_dtype()) + transition_exps = helper.create_tmp_variable(dtype=helper.input_dtype()) + log_likelihood = helper.create_tmp_variable(dtype=helper.input_dtype()) + helper.append_op( + type='linear_chain_crf', + inputs={"Emission": [input], + "Transition": transition, + "Label": label}, + outputs={ + "Alpha": [alpha], + "EmissionExps": [emission_exps], + "TransitionExps": transition_exps, + "LogLikelihood": log_likelihood + }) + + return log_likelihood + + +def crf_decoding(input, + param_attr, + label=None, + main_program=None, + startup_program=None): + helper = LayerHelper('crf_decoding', **locals()) + transition = helper.get_parameter(param_attr.name) + viterbi_path = helper.create_tmp_variable(dtype=helper.input_dtype()) + helper.append_op( + type='crf_decoding', + inputs={"Emission": [input], + "Transition": transition, + "Label": label}, + outputs={"ViterbiPath": [viterbi_path]}) + + return viterbi_path + + +def cos_sim(X, Y, **kwargs): + """ + This function performs the cosine similarity between two tensors + X and Y and returns that as the output. + """ + helper = LayerHelper('cos_sim', **kwargs) + out = helper.create_tmp_variable(dtype=X.dtype) + xnorm = helper.create_tmp_variable(dtype=X.dtype) + ynorm = helper.create_tmp_variable(dtype=X.dtype) + helper.append_op( + type='cos_sim', + inputs={'X': [X], + 'Y': [Y]}, + outputs={'Out': [out], + 'XNorm': [xnorm], + 'YNorm': [ynorm]}) + return out + + +def cross_entropy(input, label, **kwargs): + """ + This function computes cross_entropy using the input and label. + """ + helper = LayerHelper('cross_entropy', **kwargs) + out = helper.create_tmp_variable(dtype=input.dtype) + helper.append_op( + type='cross_entropy', + inputs={'X': [input], + 'Label': [label]}, + outputs={'Y': [out]}, + attrs=kwargs) + return out + + +def square_error_cost(input, label, **kwargs): + """ + This functions returns the squared error cost using the input and label. + The output is appending the op to do the above. 
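+
+    Examples:
+        A minimal sketch (predict and label are illustrative variables):
+
+        .. code-block:: python
+
+            cost = fluid.layers.square_error_cost(input=predict, label=label)
+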
+ """ + helper = LayerHelper('square_error_cost', **kwargs) + minus_out = helper.create_tmp_variable(dtype=input.dtype) + helper.append_op( + type='elementwise_sub', + inputs={'X': [input], + 'Y': [label]}, + outputs={'Out': [minus_out]}) + + square_out = helper.create_tmp_variable(dtype=input.dtype) + helper.append_op( + type='square', inputs={'X': [minus_out]}, outputs={'Y': [square_out]}) + return square_out + + +def accuracy(input, label, k=1, correct=None, total=None, **kwargs): + """ + This function computes the accuracy using the input and label. + The output is the top_k inputs and their indices. + """ + helper = LayerHelper("accuracy", **kwargs) + topk_out = helper.create_tmp_variable(dtype=input.dtype) + topk_indices = helper.create_tmp_variable(dtype="int64") + helper.append_op( + type="top_k", + inputs={"X": [input]}, + outputs={"Out": [topk_out], + "Indices": [topk_indices]}, + attrs={"k": k}) + acc_out = helper.create_tmp_variable(dtype="float32") + if correct is None: + correct = helper.create_tmp_variable(dtype="int64") + if total is None: + total = helper.create_tmp_variable(dtype="int64") + helper.append_op( + type="accuracy", + inputs={ + "Out": [topk_out], + "Indices": [topk_indices], + "Label": [label] + }, + outputs={ + "Accuracy": [acc_out], + "Correct": [correct], + "Total": [total], + }) + return acc_out + + +def chunk_eval(input, + label, + chunk_scheme, + num_chunk_types, + excluded_chunk_types=None, + **kwargs): + """ + This function computes and outputs the precision, recall and + F1-score of chunk detection. + """ + helper = LayerHelper("chunk_eval", **kwargs) + + # prepare output + precision = helper.create_tmp_variable(dtype="float32") + recall = helper.create_tmp_variable(dtype="float32") + f1_score = helper.create_tmp_variable(dtype="float32") + num_infer_chunks = helper.create_tmp_variable(dtype="int64") + num_label_chunks = helper.create_tmp_variable(dtype="int64") + num_correct_chunks = helper.create_tmp_variable(dtype="int64") + + helper.append_op( + type="chunk_eval", + inputs={"Inference": [input], + "Label": [label]}, + outputs={ + "Precision": [precision], + "Recall": [recall], + "F1-Score": [f1_score], + "NumInferChunks": [num_infer_chunks], + "NumLabelChunks": [num_label_chunks], + "NumCorrectChunks": [num_correct_chunks] + }, + attrs={ + "num_chunk_types": num_chunk_types, + 'chunk_scheme': chunk_scheme, + 'excluded_chunk_types': excluded_chunk_types or [] + }) + return precision, recall, f1_score, num_infer_chunks, num_label_chunks, num_correct_chunks + + +def sequence_conv(input, + num_filters, + filter_size=3, + filter_stride=1, + padding=None, + bias_attr=None, + param_attr=None, + act=None, + main_program=None, + startup_program=None): + """ + This function creates the op for sequence_conv, using the inputs and + other convolutional configurations for the filters and stride as given + in the input parameters to the function. + """ + + # FIXME(dzh) : want to unify the argument of python layer + # function. So we ignore some unecessary attributes. + # such as, padding_trainable, context_start. 
+ + helper = LayerHelper('sequence_conv', **locals()) + dtype = helper.input_dtype() + filter_shape = [filter_size * input.shape[1], num_filters] + filter_param = helper.create_parameter( + attr=helper.param_attr, shape=filter_shape, dtype=dtype) + pre_bias = helper.create_tmp_variable(dtype) + + helper.append_op( + type='sequence_conv', + inputs={ + 'X': [input], + 'Filter': [filter_param], + }, + outputs={"Out": pre_bias}, + attrs={ + 'contextStride': filter_stride, + 'contextStart': -int(filter_size / 2), + 'contextLength': filter_size + }) + pre_act = helper.append_bias_op(pre_bias) + return helper.append_activation(pre_act) + + +def conv2d(input, + num_filters, + filter_size, + stride=None, + padding=None, + groups=None, + param_attr=None, + bias_attr=None, + act=None, + name=None, + main_program=None, + startup_program=None): + """ + This function creates the op for a 2-dimensional Convolution. + This is performed using the parameters of filters(size, dimensionality etc) + , stride and other configurations for a Convolution operation. + This funciton can also append an activation on top of the + conv-2d output, if mentioned in the input parameters. + """ + + if stride is None: + stride = [1, 1] + helper = LayerHelper('conv2d', **locals()) + dtype = helper.input_dtype() + + num_channels = input.shape[1] + if groups is None: + num_filter_channels = num_channels + else: + if num_channels % groups != 0: + raise ValueError("num_channels must be divisible by groups.") + num_filter_channels = num_channels / groups + + if isinstance(filter_size, int): + filter_size = [filter_size, filter_size] + if isinstance(stride, int): + stride = [stride, stride] + if isinstance(padding, int): + padding = [padding, padding] + + input_shape = input.shape + filter_shape = [num_filters, num_filter_channels] + filter_size + + def _get_default_param_initializer(): + std = (2.0 / (filter_size[0]**2 * num_channels))**0.5 + return Normal(0.0, std, 0) + + filter_param = helper.create_parameter( + attr=helper.param_attr, + shape=filter_shape, + dtype=dtype, + default_initializer=_get_default_param_initializer()) + + pre_bias = helper.create_tmp_variable(dtype) + + helper.append_op( + type='conv2d_cudnn', + inputs={ + 'Input': input, + 'Filter': filter_param, + }, + outputs={"Output": pre_bias}, + attrs={'strides': stride, + 'paddings': padding, + 'groups': groups}) + + pre_act = helper.append_bias_op(pre_bias, dim_start=1, dim_end=2) + + return helper.append_activation(pre_act) + + +def sequence_pool(input, pool_type, **kwargs): + """ + This function add the operator for sequence pooling. + This is applied on top of the input using pool_type mentioned + in the parameters. + """ + helper = LayerHelper('sequence_pool', input=input, **kwargs) + dtype = helper.input_dtype() + pool_out = helper.create_tmp_variable(dtype) + max_index = helper.create_tmp_variable(dtype) + + helper.append_op( + type="sequence_pool", + inputs={"X": input}, + outputs={"Out": pool_out, + "MaxIndex": max_index}, + attrs={"pooltype": pool_type.upper()}) + + return pool_out + + +def pool2d(input, + pool_size, + pool_type, + pool_stride=None, + pool_padding=None, + global_pooling=False, + main_program=None, + startup_program=None): + """ + This function adds the operator for pooling in 2 dimensions, using the + pooling configurations mentioned in input parameters. 
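+
+    Examples:
+        A minimal sketch (conv stands for a 4-D feature map produced earlier
+        in the network, e.g. by conv2d above):
+
+        .. code-block:: python
+
+            pool = fluid.layers.pool2d(input=conv, pool_size=2,
+                                       pool_type='max', pool_stride=2)
+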
+ """ + if pool_padding is None: + pool_padding = [0, 0] + if pool_stride is None: + pool_stride = [1, 1] + if pool_type not in ["max", "avg"]: + raise ValueError( + "Unknown pool_type: '%s'. It can only be 'max' or 'avg'.", + str(pool_type)) + if isinstance(pool_size, int): + pool_size = [pool_size, pool_size] + if isinstance(pool_stride, int): + pool_stride = [pool_stride, pool_stride] + if isinstance(pool_padding, int): + pool_padding = [pool_padding, pool_padding] + + helper = LayerHelper('pool2d', **locals()) + dtype = helper.input_dtype() + pool_out = helper.create_tmp_variable(dtype) + + helper.append_op( + type="pool2d", + inputs={"X": input}, + outputs={"Out": pool_out}, + attrs={ + "pooling_type": pool_type, + "ksize": pool_size, + "global_pooling": global_pooling, + "strides": pool_stride, + "paddings": pool_padding + }) + + return pool_out + + +def batch_norm(input, + act=None, + is_test=False, + momentum=0.9, + epsilon=1e-05, + param_attr=None, + bias_attr=None, + data_layout='NCHW', + main_program=None, + startup_program=None): + """ + This function helps create an operator to implement + the BatchNorm layer using the configurations from the input parameters. + """ + helper = LayerHelper('batch_norm', **locals()) + dtype = helper.input_dtype() + + input_shape = input.shape + if data_layout == 'NCHW': + channel_num = input_shape[1] + else: + if data_layout == 'NHWC': + channel_num = input_shape[-1] + else: + raise ValueError("unsupported data layout:" + data_layout) + + param_shape = [channel_num] + + # create parameter + scale = helper.create_parameter( + attr=helper.param_attr, + shape=param_shape, + dtype=dtype, + default_initializer=Constant(1.0)) + + bias = helper.create_parameter( + attr=helper.param_attr, shape=param_shape, dtype=dtype, is_bias=True) + + mean = helper.create_global_variable( + dtype=input.dtype, shape=param_shape, persistable=True) + helper.set_variable_initializer(var=mean, initializer=Constant(0.0)) + + variance = helper.create_global_variable( + dtype=input.dtype, shape=param_shape, persistable=True) + helper.set_variable_initializer(var=variance, initializer=Constant(1.0)) + + # create output + # mean and mean_out share the same memory + mean_out = mean + # variance and variance out share the same memory + variance_out = variance + saved_mean = helper.create_tmp_variable(dtype) + saved_variance = helper.create_tmp_variable(dtype) + + batch_norm_out = helper.create_tmp_variable(dtype) + + helper.append_op( + type="batch_norm", + inputs={ + "X": input, + "Scale": scale, + "Bias": bias, + "Mean": mean, + "Variance": variance + }, + outputs={ + "Y": batch_norm_out, + "MeanOut": mean_out, + "VarianceOut": variance_out, + "SavedMean": saved_mean, + "SavedVariance": saved_variance + }, + attrs={"momentum": momentum, + "epsilon": epsilon, + "is_test": is_test}) + + return helper.append_activation(batch_norm_out) + + +def beam_search_decode(ids, scores, main_program=None, startup_program=None): + helper = LayerHelper('beam_search_decode', **locals()) + sentence_ids = helper.create_tmp_variable(dtype=ids.dtype) + sentence_scores = helper.create_tmp_variable(dtype=ids.dtype) + + helper.append_op( + type="beam_search_decode", + inputs={"Ids": ids, + "Scores": scores}, + outputs={ + "SentenceIds": sentence_ids, + "SentenceScores": sentence_scores + }) + + return sentence_ids, sentence_scores + + +def conv2d_transpose(input, + num_filters, + output_size=None, + filter_size=None, + padding=None, + stride=None, + param_attr=None, + main_program=None, + 
startup_program=None): + """ + The transpose of conv2d layer. + + This layer is also known as deconvolution layer. + + Args: + input(Variable): The input image with [N, C, H, W] format. + num_filters(int): The number of filter. It is as same as the output + image channel. + output_size(int|tuple|None): The output image size. If output size is a + tuple, it must contain two integers, (image_H, image_W). This + parameter only works when filter_size is None. + filter_size(int|tuple|None): The filter size. If filter_size is a tuple, + it must contain two integers, (filter_size_H, filter_size_W). + Otherwise, the filter will be a square. None if use output size to + calculate filter_size + padding(int|tuple): The padding size. If padding is a tuple, it must + contain two integers, (padding_H, padding_W). Otherwise, the + padding_H = padding_W = padding. + stride(int|tuple): The stride size. If stride is a tuple, it must + contain two integers, (stride_H, stride_W). Otherwise, the + stride_H = stride_W = stride. + param_attr: Parameter Attribute. + main_program(Program): the main program + startup_program(Program): the startup program + + Returns: + Variable: Output image. + """ + helper = LayerHelper("conv2d_transpose", **locals()) + if not isinstance(input, Variable): + raise TypeError("Input of conv2d_transpose must be Variable") + input_channel = input.shape[1] + + op_attr = dict() + + if isinstance(padding, int): + op_attr['paddings'] = [padding, padding] + elif padding is not None: + op_attr['paddings'] = padding + + if isinstance(stride, int): + op_attr['strides'] = stride + elif stride is not None: + op_attr['strides'] = stride + + if filter_size is None: + if output_size is None: + raise ValueError("output_size must be set when filter_size is None") + if isinstance(output_size, int): + output_size = [output_size, output_size] + + padding = op_attr.get('paddings', [0, 0]) + stride = op_attr.get('strides', [1, 1]) + + h_in = input.shape[2] + w_in = input.shape[3] + filter_size_h = output_size[0] - \ + (h_in - 1) * stride[0] + 2 * padding[0] + filter_size_w = output_size[1] - \ + (w_in - 1) * stride[1] + 2 * padding[1] + filter_size = [filter_size_h, filter_size_w] + elif isinstance(filter_size, int): + filter_size = [filter_size, filter_size] + + filter_shape = [input_channel, num_filters] + filter_size + img_filter = helper.create_parameter( + dtype=input.dtype, shape=filter_shape, attr=helper.param_attr) + + out = helper.create_tmp_variable(dtype=input.dtype) + helper.append_op( + type='conv2d_transpose', + inputs={'Input': [input], + 'Filter': [img_filter]}, + outputs={'Output': out}, + attrs=op_attr) + + return out + + +def seq_expand(x, y, main_program=None, startup_program=None): + """Sequence Expand Layer. This layer will expand the input variable **x** + according to LoD information of **y**. And the following examples will + explain how seq_expand works: + + .. 
code-block:: text + + * Case 1 + x is a LoDTensor: + x.lod = [[0, 2, 3], + [0, 1, 3, 4]] + x.data = [a, b, c, d] + x.dims = [4, 1] + + y is a LoDTensor: + y.lod = [[0, 2, 4], + [0, 3, 6, 7, 8]] + + with condition len(y.lod[-1]) - 1 == x.dims[0] + + then output is a 2-level LoDTensor: + out.lod = [[0, 2, 4], + [0, 3, 6, 7, 8]] + out.data = [a, a, a, b, b, b, c, d] + out.dims = [8, 1] + + * Case 2 + x is a Tensor: + x.data = [a, b, c] + x.dims = [3, 1] + + y is a LoDTensor: + Y.lod = [[0, 2, 3, 6]] + + with condition len(y.lod[-1]) - 1 == x.dims[0] + + then output is a 1-level LoDTensor: + out.lod = [[0, 2, 3, 6]] + out.data = [a, a, b, c, c, c] + out.dims = [6, 1] + + Args: + x (Variable): The input variable which is a Tensor or LoDTensor. + y (Variable): The input variable which is a LoDTensor. + main_program (Program): The main program. + startup_program (Program): The startup program. + + Returns: + Variable: The expanded variable which is a LoDTensor. + + Examples: + .. code-block:: python + + x = fluid.layers.data(name='x', shape=[10], dtype='float32') + y = fluid.layers.data(name='y', shape=[10, 20], + dtype='float32', lod_level=1) + out = layers.seq_expand(x=x, y=y) + """ + helper = LayerHelper('seq_expand', input=x, **locals()) + dtype = helper.input_dtype() + tmp = helper.create_tmp_variable(dtype) + helper.append_op( + type='seq_expand', inputs={'X': x, + 'Y': y}, outputs={'Out': tmp}) + return tmp diff --git a/python/paddle/v2/fluid/layers/ops.py b/python/paddle/v2/fluid/layers/ops.py new file mode 100644 index 0000000000000000000000000000000000000000..fa312ace60390e5fdd9637dccc71ccf8b247ca47 --- /dev/null +++ b/python/paddle/v2/fluid/layers/ops.py @@ -0,0 +1,9 @@ +from ..registry import register_layer +__all__ = [ + 'mean', 'mul', 'dropout', 'reshape', 'sigmoid', 'scale', 'transpose', + 'sigmoid_cross_entropy_with_logits', 'elementwise_add', 'elementwise_div', + 'elementwise_sub', 'elementwise_mul', 'clip', 'abs' +] + +for _OP in set(__all__): + globals()[_OP] = register_layer(_OP) diff --git a/python/paddle/v2/fluid/layers/tensor.py b/python/paddle/v2/fluid/layers/tensor.py new file mode 100644 index 0000000000000000000000000000000000000000..a839ed897d7a9d4b238a8551b2255b87f207caee --- /dev/null +++ b/python/paddle/v2/fluid/layers/tensor.py @@ -0,0 +1,130 @@ +from ..layer_helper import LayerHelper + +__all__ = [ + 'create_tensor', 'cast', 'concat', 'sums', 'assign', + 'fill_constant_batch_size_like', 'fill_constant', 'ones', 'zeros' +] + + +def create_tensor(dtype, name=None, main_program=None, startup_program=None): + helper = LayerHelper("create_tensor", **locals()) + return helper.create_variable(name=helper.name, dtype=dtype) + + +def cast(x, dtype, main_program=None): + """ + This function takes in the input with input_dtype + and casts it to the output_dtype as the output. + """ + helper = LayerHelper('cast', **locals()) + out = helper.create_tmp_variable(dtype=dtype) + helper.append_op( + type='cast', + inputs={'X': [x]}, + outputs={'Out': [out]}, + attrs={'in_dtype': x.dtype, + 'out_dtype': out.dtype}) + return out + + +def concat(input, axis, main_program=None, startup_program=None): + """ + This function concats the input along the axis mentioned + and returns that as the output. 
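+
+    Examples:
+        A minimal sketch (a and b are illustrative variables whose shapes
+        match except along the concat axis):
+
+        .. code-block:: python
+
+            out = fluid.layers.concat(input=[a, b], axis=1)
+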
+ """ + helper = LayerHelper('concat', **locals()) + out = helper.create_tmp_variable(dtype=helper.input_dtype()) + helper.append_op( + type='concat', + inputs={'X': input}, + outputs={'Out': [out]}, + attrs={'axis': axis}) + return out + + +def sums(input, out=None, main_program=None, startup_program=None): + """ + This function takes in the input and performs the sum operation on it + and returns that as the output. + """ + helper = LayerHelper('sum', **locals()) + if out is None: + out = helper.create_tmp_variable(dtype=helper.input_dtype()) + helper.append_op(type='sum', inputs={'X': input}, outputs={'Out': out}) + return out + + +def assign(input, output, main_program=None, startup_program=None): + helper = LayerHelper('assign', **locals()) + helper.append_op( + type='scale', + inputs={'X': [input]}, + outputs={'Out': [output]}, + attrs={'scale': 1.0}) + return output + + +def fill_constant(shape, + dtype, + value, + out=None, + main_program=None, + startup_program=None): + """ + This function creates a tensor , with shape as mentioned in the input and + specified dtype and fills this up with a constant value that + comes in the input. It also sets the stop_gradient to be True. + """ + helper = LayerHelper("fill_constant", **locals()) + if out is None: + out = helper.create_tmp_variable(dtype=dtype) + helper.append_op( + type='fill_constant', + inputs={}, + outputs={'Out': [out]}, + attrs={'shape': shape, + 'dtype': out.dtype, + 'value': float(value)}) + out.stop_gradient = True + return out + + +def fill_constant_batch_size_like(input, + shape, + dtype, + value, + input_dim_idx=0, + output_dim_idx=0, + main_program=None, + startup_program=None): + helper = LayerHelper("fill_constant_batch_size_like", **locals()) + out = helper.create_tmp_variable(dtype=dtype) + helper.append_op( + type='fill_constant_batch_size_like', + inputs={'Input': input}, + outputs={'Out': [out]}, + attrs={ + 'shape': shape, + 'dtype': out.dtype, + 'value': float(value), + 'input_dim_idx': input_dim_idx, + 'output_dim_idx': output_dim_idx + }) + out.stop_gradient = True + return out + + +def ones(shape, dtype, main_program=None): + """ + This function performs the same function as fill_constant() declared above + with the constant value being 1.0. + """ + return fill_constant(value=1.0, **locals()) + + +def zeros(shape, dtype, main_program=None): + """ + This function performs the same function as fill_constant() declared above + with the constant value being 0.0. 
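+
+    Examples:
+        A minimal sketch:
+
+        .. code-block:: python
+
+            zero = fluid.layers.zeros(shape=[1], dtype='int64')
+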
+ """ + return fill_constant(value=0.0, **locals()) diff --git a/python/paddle/v2/fluid/tests/book/test_image_classification_train.py b/python/paddle/v2/fluid/tests/book/test_image_classification_train.py index 4e71b6f345ea7a1e6d29bc4ad810bc5b5f99d456..3d336ffe9582ddd9a2031e7aa7e2c26a772820f8 100644 --- a/python/paddle/v2/fluid/tests/book/test_image_classification_train.py +++ b/python/paddle/v2/fluid/tests/book/test_image_classification_train.py @@ -1,9 +1,9 @@ from __future__ import print_function -import numpy as np +import sys + import paddle.v2 as paddle import paddle.v2.fluid as fluid -import sys def resnet_cifar10(input, depth=32): diff --git a/python/paddle/v2/fluid/tests/book/test_label_semantic_roles.py b/python/paddle/v2/fluid/tests/book/test_label_semantic_roles.py index d2693b602ea5de9d2d60fbe114820b25119bfa3f..c3591a613acafb268a5bd70618cd4555450bac29 100644 --- a/python/paddle/v2/fluid/tests/book/test_label_semantic_roles.py +++ b/python/paddle/v2/fluid/tests/book/test_label_semantic_roles.py @@ -150,7 +150,7 @@ def main(): crf_decode = fluid.layers.crf_decoding( input=feature_out, param_attr=fluid.ParamAttr(name='crfw')) - precision, recall, f1_score = fluid.layers.chunk_eval( + chunk_evaluator = fluid.evaluator.ChunkEvaluator( input=crf_decode, label=target, chunk_scheme="IOB", @@ -176,20 +176,21 @@ def main(): batch_id = 0 for pass_id in xrange(PASS_NUM): + chunk_evaluator.reset(exe) for data in train_data(): - outs = exe.run(fluid.default_main_program(), - feed=feeder.feed(data), - fetch_list=[avg_cost, precision, recall, f1_score]) - avg_cost_val = np.array(outs[0]) - precision_val = np.array(outs[1]) - recall_val = np.array(outs[2]) - f1_score_val = np.array(outs[3]) + cost, precision, recall, f1_score = exe.run( + fluid.default_main_program(), + feed=feeder.feed(data), + fetch_list=[avg_cost] + chunk_evaluator.metrics) + pass_precision, pass_recall, pass_f1_score = chunk_evaluator.eval( + exe) if batch_id % 10 == 0: - print("avg_cost=" + str(avg_cost_val)) - print("precision_val=" + str(precision_val)) - print("recall_val:" + str(recall_val)) - print("f1_score_val:" + str(f1_score_val)) + print("avg_cost:" + str(cost) + " precision:" + str( + precision) + " recall:" + str(recall) + " f1_score:" + str( + f1_score) + " pass_precision:" + str( + pass_precision) + " pass_recall:" + str(pass_recall) + + " pass_f1_score:" + str(pass_f1_score)) # exit early for CI exit(0) diff --git a/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py b/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py index 80f859967979ec07536d652d4ea620fd4ddb2daa..c0b051f862f245b020a872b0a32fa4b560d1d574 100644 --- a/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py +++ b/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py @@ -1,6 +1,51 @@ import numpy as np import paddle.v2 as paddle import paddle.v2.fluid as fluid +from paddle.v2.fluid.layer_helper import LayerHelper + + +def lstm(x, + c_pre_init, + hidden_dim, + forget_bias=None, + main_program=None, + startup_program=None): + """ + This function helps create an operator for the LSTM (Long Short Term + Memory) cell that can be used inside an RNN. 
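+
+    A usage sketch, mirroring lstm_net below (emb is a sequence of word
+    embeddings and c_pre_init a zero-filled initial cell state):
+
+    .. code-block:: python
+
+        layer_out = lstm(emb, c_pre_init=c_pre_init, hidden_dim=emb_dim)
+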
+ """ + helper = LayerHelper('lstm_unit', **locals()) + rnn = fluid.layers.StaticRNN() + with rnn.step(): + c_pre = rnn.memory(init=c_pre_init) + x_t = rnn.step_input(x) + + before_fc = fluid.layers.concat( + input=[x_t, c_pre], + axis=1, + main_program=main_program, + startup_program=startup_program) + after_fc = fluid.layers.fc(input=before_fc, + size=hidden_dim * 4, + main_program=main_program, + startup_program=startup_program) + + dtype = x.dtype + c = helper.create_tmp_variable(dtype) + h = helper.create_tmp_variable(dtype) + + helper.append_op( + type='lstm_unit', + inputs={"X": after_fc, + "C_prev": c_pre}, + outputs={"C": c, + "H": h}, + attrs={"forget_bias": forget_bias}) + + rnn.update_memory(c_pre, c) + rnn.output(h) + + return rnn() def lstm_net(dict_dim, class_dim=2, emb_dim=32, seq_len=80, batch_size=50): @@ -23,8 +68,7 @@ def lstm_net(dict_dim, class_dim=2, emb_dim=32, seq_len=80, batch_size=50): c_pre_init = fluid.layers.fill_constant( dtype=emb.dtype, shape=[batch_size, emb_dim], value=0.0) c_pre_init.stop_gradient = False - layer_1_out = fluid.layers.lstm( - emb, c_pre_init=c_pre_init, hidden_dim=emb_dim) + layer_1_out = lstm(emb, c_pre_init=c_pre_init, hidden_dim=emb_dim) layer_1_out = fluid.layers.transpose(x=layer_1_out, axis=[1, 0, 2]) prediction = fluid.layers.fc(input=layer_1_out, diff --git a/python/paddle/v2/fluid/tests/test_chunk_eval_op.py b/python/paddle/v2/fluid/tests/test_chunk_eval_op.py index 819e65a653437f0c34e14403f76317ff3b7f37f4..53bf6f815b8c7baf4c92d9fd488b69722ab0bef5 100644 --- a/python/paddle/v2/fluid/tests/test_chunk_eval_op.py +++ b/python/paddle/v2/fluid/tests/test_chunk_eval_op.py @@ -147,7 +147,13 @@ class TestChunkEvalOp(OpTest): 'Recall': np.asarray( [recall], dtype='float32'), 'F1-Score': np.asarray( - [f1], dtype='float32') + [f1], dtype='float32'), + 'NumInferChunks': np.asarray( + [self.num_infer_chunks], dtype='int64'), + 'NumLabelChunks': np.asarray( + [self.num_label_chunks], dtype='int64'), + 'NumCorrectChunks': np.asarray( + [self.num_correct_chunks], dtype='int64') } def setUp(self): diff --git a/python/paddle/v2/fluid/tests/test_reduce_op.py b/python/paddle/v2/fluid/tests/test_reduce_op.py index 70359d60cbe656150877673c63e81eae92d8ab9a..a021d4dd91bb9cc1e5d85411b3813b966ef5b296 100644 --- a/python/paddle/v2/fluid/tests/test_reduce_op.py +++ b/python/paddle/v2/fluid/tests/test_reduce_op.py @@ -85,5 +85,19 @@ class Test1DReduce(OpTest): self.check_grad(['X'], 'Out') +class TestReduceAll(OpTest): + def setUp(self): + self.op_type = "reduce_sum" + self.inputs = {'X': np.random.random((5, 6, 2, 10)).astype("float32")} + self.attrs = {'reduce_all': True} + self.outputs = {'Out': self.inputs['X'].sum()} + + def test_check_output(self): + self.check_output() + + def test_check_grad(self): + self.check_grad(['X'], 'Out') + + if __name__ == '__main__': unittest.main() diff --git a/python/paddle/v2/fluid/tests/test_reshape_op.py b/python/paddle/v2/fluid/tests/test_reshape_op.py index 16bb6bb2af67f7d32a2fafc1cb37412084ec0829..18ee3aece656276fec9671df9baf298b7fd3c9b1 100644 --- a/python/paddle/v2/fluid/tests/test_reshape_op.py +++ b/python/paddle/v2/fluid/tests/test_reshape_op.py @@ -17,5 +17,19 @@ class TestReshapeOp(OpTest): self.check_grad(["X"], "Out") +class TestReshapeOpDimInfer(OpTest): + def setUp(self): + self.op_type = "reshape" + self.inputs = {'X': np.random.random((10, 20)).astype("float32")} + self.attrs = {'shape': [4, -1, 5]} + self.outputs = {'Out': self.inputs['X'].reshape(self.attrs['shape'])} + + def test_check_output(self): 
+ self.check_output() + + def test_check_grad(self): + self.check_grad(["X"], "Out") + + if __name__ == '__main__': unittest.main() diff --git a/python/paddle/v2/parameters.py b/python/paddle/v2/parameters.py index bd97dc1199fedc8ac91c1c6086957e8cce88bdc4..7b7d1a1d1672802e0e91a857100604758683224e 100644 --- a/python/paddle/v2/parameters.py +++ b/python/paddle/v2/parameters.py @@ -383,19 +383,22 @@ class Parameters(object): params.deserialize(param_name, f) return params - def init_from_tar(self, f): + def init_from_tar(self, f, exclude_params=[]): """ Different from `from_tar`, this interface can be used to init partial network parameters from another saved model. :param f: the initialized model file. :type f: tar file + :param exclude_params: the names of parameters that should + not be initialized from the model file. + :type exclude_params: list of strings :return: Nothing. """ tar_param = Parameters.from_tar(f) for pname in tar_param.names(): - if pname in self.names(): + if pname in self.names() and pname not in exclude_params: self.set(pname, tar_param.get(pname)) diff --git a/python/paddle/v2/reader/decorator.py b/python/paddle/v2/reader/decorator.py index 7e457f987d36e52e9e7c7727b4f996ad31c6bf08..27c82c95f79e0a3e3129627bfa33d85e0d3cd862 100644 --- a/python/paddle/v2/reader/decorator.py +++ b/python/paddle/v2/reader/decorator.py @@ -390,8 +390,6 @@ def pipe_reader(left_cmd, if not callable(parser): raise TypeError("parser must be a callable object") - process = subprocess.Popen( - left_cmd.split(" "), bufsize=bufsize, stdout=subprocess.PIPE) # TODO(typhoonzero): add a thread to read stderr # Always init a decompress object is better than @@ -400,6 +398,8 @@ def pipe_reader(left_cmd, 32 + zlib.MAX_WBITS) # offset 32 to skip the header def reader(): + process = subprocess.Popen( + left_cmd.split(" "), bufsize=bufsize, stdout=subprocess.PIPE) remained = "" while True: buff = process.stdout.read(bufsize) diff --git a/python/paddle/v2/reader/tests/decorator_test.py b/python/paddle/v2/reader/tests/decorator_test.py index 5a92951b100fa51ab6df7039d9c6b54d1f9d963e..06e14796daf27812b9aeb1e4b024f294c7609991 100644 --- a/python/paddle/v2/reader/tests/decorator_test.py +++ b/python/paddle/v2/reader/tests/decorator_test.py @@ -145,5 +145,35 @@ class TestXmap(unittest.TestCase): self.assertEqual(e, mapper(idx)) +class TestPipeReader(unittest.TestCase): + def test_pipe_reader(self): + def simple_parser(lines): + return lines + + import tempfile + + records = [str(i) for i in xrange(5)] + temp = tempfile.NamedTemporaryFile() + try: + with open(temp.name, 'w') as f: + for r in records: + f.write('%s\n' % r) + + cmd = "cat %s" % temp.name + reader = paddle.v2.reader.pipe_reader( + cmd, simple_parser, bufsize=128) + for i in xrange(4): + result = [] + for r in reader(): + result.append(r) + + for idx, e in enumerate(records): + print e, result[idx] + self.assertEqual(e, result[idx]) + finally: + # delete the temporary file + temp.close() + + if __name__ == '__main__': unittest.main() diff --git a/python/setup.py.in b/python/setup.py.in index 9ccb4dc1762ac761212347fa7c7c94b223d75e24..8396fb44cfcee28211b5d3db7684a4adce1fb1f6 100644 --- a/python/setup.py.in +++ b/python/setup.py.in @@ -68,6 +68,7 @@ packages=['paddle', 'paddle.v2.plot', 'paddle.v2.fluid', 'paddle.v2.fluid.proto', + 'paddle.v2.fluid.layers', 'py_paddle'] with open('@PADDLE_SOURCE_DIR@/python/requirements.txt') as f: