提交 441de2c6 编写于 作者: D dengkaipeng

Merge branch 'develop' of https://github.com/PaddlePaddle/FluidDoc into hapi-en

......@@ -115,15 +115,6 @@ clang-formater.......................................(no files to check)Skipped
create mode 100644 233
```
<b> <font color="red">需要注意的是:您需要在commit中添加说明(commit message)以触发CI单测,写法如下:</font> </b>
```bash
# 触发develop分支的CI单测
➜ git commit -m "test=develop"
# 触发release/1.1分支的CI单测
➜ git commit -m "test=release/1.1"
```
## 保持本地仓库最新
......
......@@ -113,14 +113,6 @@ clang-formater.......................................(no files to check)Skipped
create mode 100644 233
```
<b> <font color="red">Please note: you need to add a commit message to trigger the CI tests. The commands are as follows:</font> </b>
```bash
# Trigger CI unit tests on the develop branch
➜ git commit -m "test=develop"
# Trigger CI unit tests on the release/1.1 branch
➜ git commit -m "test=release/1.1"
```
## Keep the latest local repository
......
......@@ -26,7 +26,7 @@
<div align="center">
<img src="https://github.com/PaddlePaddle/FluidDoc/blob/release/1.1/doc/fluid/advanced_usage/development/contribute_to_paddle/img/cla_unsigned.png?raw=true" height="330" width="400">
<img src="https://github.com/PaddlePaddle/FluidDoc/blob/release/1.1/doc/fluid/advanced_usage/development/contribute_to_paddle/img/cla_unsigned.png?raw=true" height="40" width="500">
</div>
......
......@@ -26,7 +26,7 @@ For the first time to submit Pull Request,you need to sign CLA(Contributor Licen
<div align="center">
<img src="https://github.com/PaddlePaddle/FluidDoc/blob/release/1.1/doc/fluid/advanced_usage/development/contribute_to_paddle/img/cla_unsigned.png?raw=true" height="330" width="400">
<img src="https://github.com/PaddlePaddle/FluidDoc/blob/release/1.1/doc/fluid/advanced_usage/development/contribute_to_paddle/img/cla_unsigned.png?raw=true" height="40" width="500">
</div>
......
# VisualDL 工具简介
......@@ -8,14 +7,26 @@
VisualDL是深度学习模型可视化分析工具,以丰富的图表呈现训练参数变化趋势、模型结构、数据样本、高维数据分布等。可帮助用户更清晰直观地理解深度学习模型训练过程及模型结构,进而实现高效的模型优化。
VisualDL是飞桨可视化分析工具,以丰富的图表呈现训练参数变化趋势、模型结构、数据样本、直方图、PR曲线及高维数据分布。可帮助用户更清晰直观地理解深度学习模型训练过程及模型结构,进而实现高效的模型优化。
具体功能使用方式请参见**VisualDL使用指南**。项目正处于高速迭代中,敬请期待新组件的加入。
VisualDL提供丰富的可视化功能,支持实时训练参数分析、图结构、数据样本可视化及高维数据降维呈现等诸多功能。具体功能使用方式,请参见 **VisualDL 使用指南**。项目正处于高速迭代中,敬请期待新组件的加入。
VisualDL支持的浏览器:Chrome(81和83)、Safari 13、Firefox(77和78)、Edge(Chromium版)。
VisualDL原生支持Python的使用,通过在模型的Python配置中添加几行代码,便可为训练过程提供丰富的可视化支持。
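例如,下面几行示意代码即可把训练中的标量指标写入日志(日志目录与tag仅为示例,具体接口说明见后文「使用方式」):

```python
from visualdl import LogWriter

# 示意:将训练中的 loss 写入 VisualDL 日志(目录与 tag 仅作演示)
with LogWriter(logdir="./log/quick_start") as writer:
    for step in range(100):
        writer.add_scalar(tag="train/loss", step=step, value=1.0 / (step + 1))
```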
## 目录
* [核心亮点](#核心亮点)
* [安装方式](#安装方式)
* [使用方式](#使用方式)
* [可视化功能概览](#可视化功能概览)
* [开源贡献](#开源贡献)
* [更多细节](#更多细节)
* [技术交流](#技术交流)
## 核心亮点
......@@ -26,7 +37,7 @@ API设计简洁易懂,使用简单。模型结构一键实现可视化。
### 功能丰富
功能覆盖训练参数、图结构、数据样本及数据降维可视化。
功能覆盖标量、数据样本、图结构、直方图、PR曲线及数据降维可视化。
### 高兼容性
......@@ -40,13 +51,23 @@ API设计简洁易懂,使用简单。模型结构一键实现可视化。
## 安装方式
使用pip安装 VisualDL 运行范例:
### 使用pip安装
```shell
pip install --upgrade visualdl==2.0.0a2
pip install --upgrade --pre visualdl
```
### 使用代码安装
```
git clone https://github.com/PaddlePaddle/VisualDL.git
cd VisualDL
python setup.py bdist_wheel
pip install --upgrade dist/visualdl-*.whl
```
需要注意的是,Python官方自2020年1月1日起不再维护Python 2,为保障代码可用性,VisualDL现仅支持Python 3。
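安装完成后,可在Python中做一个简单验证(示意代码,输出的版本号以实际安装为准):

```python
# 示意:验证 VisualDL 是否安装成功
import visualdl
print(visualdl.__version__)
```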
## 使用方式
......@@ -57,15 +78,13 @@ VisualDL将训练过程中的数据、参数等信息储存至日志文件中后
VisualDL的后端提供了Python SDK,可通过LogWriter定制一个日志记录器,接口如下:
```python
class LogWriter(
logdir=None,
class LogWriter(logdir=None,
comment='',
max_queue=10,
flush_secs=120,
filename_suffix='',
write_to_disk=True,
**kwargs
)
**kwargs)
```
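在查看各参数的具体说明之前,先给出一个按上述接口构造日志记录器的最小示意(目录与参数取值仅作演示):

```python
from visualdl import LogWriter

# 最小示意:以关键字形式传入可选参数,并写入一条标量数据
with LogWriter(logdir="./log/writer_demo", flush_secs=60) as writer:
    writer.add_scalar(tag="demo/loss", step=0, value=1.0)
```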
#### 接口参数
......@@ -103,16 +122,21 @@ with LogWriter(logdir="./log/scalar_test/train") as writer:
使用命令行启动VisualDL面板,命令格式如下:
```python
visualdl --logdir <dir_1, dir_2, ... , dir_n> --host <host> --port <port>
visualdl --logdir <dir_1, dir_2, ... , dir_n> --host <host> --port <port> --cache-timeout <cache_timeout> --language <language> --public-path <public_path> --api-only
```
参数详情:
| 参数 | 意义 |
| -------- | ------------------------------------------------------------ |
| --------------- | ------------------------------------------------------------ |
| --logdir | 设定日志所在目录,可以指定多个目录,VisualDL将遍历并且迭代寻找指定目录的子目录,将所有实验结果进行可视化 |
| --model | 设定模型文件路径(非文件夹路径),VisualDL将在此路径指定的模型文件进行可视化,目前可支持PaddlePaddle、ONNX、Keras、Core ML、Caffe等多种模型结构,详情可查看[graph支持模型种类]([https://github.com/PaddlePaddle/VisualDL/blob/develop/docs/components/README.md#Graph--%E7%BD%91%E7%BB%9C%E7%BB%93%E6%9E%84%E7%BB%84%E4%BB%B6](https://github.com/PaddlePaddle/VisualDL/blob/develop/docs/components/README.md#Graph--网络结构组件)) |
| --host | 设定IP,默认为`127.0.0.1` |
| --port | 设定端口,默认为`8040` |
| --cache-timeout | 后端缓存时间,在缓存时间内前端多次请求同一url,返回的数据从缓存中获取,默认为20秒 |
| --language | VisualDL面板语言,可指定为'EN'或'ZH',默认为浏览器使用语言 |
| --public-path | VisualDL面板URL路径,默认是'/app',即访问地址为'http://&lt;host&gt;:&lt;port&gt;/app' |
| --api-only | 是否只提供API,如果设置此参数,则VisualDL不提供页面展示,只提供API服务,此时API地址为'http://&lt;host&gt;:&lt;port&gt;/&lt;public_path&gt;/api';若没有设置public_path参数,则默认为'http://&lt;host&gt;:&lt;port&gt;/api' |
针对上一步生成的日志,启动命令为:
......@@ -130,19 +154,26 @@ visualdl.server.app.run(logdir,
port=8080,
cache_timeout=20,
language=None,
public_path=None,
api_only=False,
open_browser=False)
```
接口参数:
请注意:除`logdir`外,其他参数均为可选的关键字参数,传递时请指明参数名。
接口参数具体如下:
| 参数 | 格式 | 含义 |
| ------------- | ------------------------------------------------ | ------------------------------------------------------------ |
| logdir | string或list[string_1, string_2, ... , string_n] | 日志文件所在的路径,VisualDL将在此路径下递归搜索日志文件并进行可视化,可指定单个或多个路径 |
| model | string | 模型文件路径(非文件夹路径),VisualDL将在此路径指定的模型文件进行可视化 |
| host | string | 指定启动服务的ip,默认为`127.0.0.1` |
| port | int | 启动服务端口,默认为`8040` |
| cache_timeout | int | 后端缓存时间,在缓存时间内前端多次请求同一url,返回的数据从缓存中获取,默认为20秒 |
| language | string | VisualDL面板语言,可指定为'EN'或'CN',默认自动匹配操作系统使用语言 |
| open_browser | boolean | 是否打开浏览器,设置为True则在启动后自动打开浏览器并访问VisualDL面板 |
| language | string | VisualDL面板语言,可指定为'en'或'zh',默认为浏览器使用语言 |
| public_path | string | VisualDL面板URL路径,默认是'/app',即访问地址为'http://&lt;host&gt;:&lt;port&gt;/app' |
| api_only | boolean | 是否只提供API,如果设置此参数,则VisualDL不提供页面展示,只提供API服务,此时API地址为'http://&lt;host&gt;:&lt;port&gt;/&lt;public_path&gt;/api';若没有设置public_path参数,则默认为'http://&lt;host&gt;:&lt;port&gt;/api' |
| open_browser | boolean | 是否打开浏览器,设置为True则在启动后自动打开浏览器并访问VisualDL面板;若设置了api_only,则忽略此参数 |
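参数传递方式可参考下面的示意脚本(除`logdir`外均以关键字形式传入,取值仅作演示):

```python
# 示意:以关键字参数形式通过 Python 接口启动 VisualDL 面板(取值仅作演示)
from visualdl.server import app

app.run(logdir="./log",
        host="127.0.0.1",
        port=8080,
        cache_timeout=20,
        language="zh",
        open_browser=False)
```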
针对上一步生成的日志,我们的启动脚本为:
......@@ -155,7 +186,7 @@ app.run(logdir="./log")
在使用任意一种方式启动VisualDL面板后,打开浏览器访问VisualDL面板,即可查看日志的可视化结果,如图:
<p align="center">
<img src="http://visualdl.bj.bcebos.com/images/3points_demo.png" width="60%"/>
<img src="https://user-images.githubusercontent.com/48054808/82786044-67ae9880-9e96-11ea-8a2b-3a0951a6ec19.png" width="60%"/>
</p>
......@@ -163,27 +194,31 @@ app.run(logdir="./log")
## 可视化功能概览
### Scalar
以图表形式实时展示训练过程参数,如loss、accuracy。让用户通过观察单组或多组训练参数变化,了解训练过程,加速模型调优。具有两大特点:
#### 动态展示
在启动VisualDL Board后,LogReader将不断增量的读取日志中数据并供前端调用展示,因此能够在训练中同步观测指标变化,如下图:
在启动VisualDL后,LogReader将不断增量地读取日志中的数据并供前端调用展示,因此能够在训练中同步观测指标变化,如下图:
<p align="center">
<img src="http://visualdl.bj.bcebos.com/images/dynamic_display.gif" width="60%"/>
</p>
#### 多实验对比
只需在启动VisualDL Board的时将每个实验日志所在路径同时传入即可,每个实验中相同tag的指标将绘制在一张图中同步呈现,如下图:
只需在启动VisualDL时将每个实验日志所在路径同时传入即可,每个实验中相同tag的指标将绘制在一张图中同步呈现,如下图:
<p align="center">
<img src="http://visualdl.bj.bcebos.com/images/multi_experiments.gif" width="100%"/>
</p>
### Image
实时展示训练过程中的图像数据,用于观察不同训练阶段的图像变化,进而深入了解训练过程及效果。
<p align="center">
......@@ -191,6 +226,56 @@ app.run(logdir="./log")
</p>
### Audio
实时查看训练过程中的音频数据,监控语音识别与合成等任务的训练过程。
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/89017647-38605000-d34d-11ea-9d75-7d10b9854c36.gif" width="100%"/>
</p>
### Graph
一键可视化模型的网络结构。可查看模型属性、节点信息、节点输入输出等,并支持节点搜索,辅助用户快速分析模型结构与了解数据流向。
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84483052-5acdd980-accb-11ea-8519-1608da7ee698.png" width="100%"/>
</p>
### Histogram
以直方图形式展示Tensor(weight、bias、gradient等)数据在训练过程中的变化趋势。深入了解模型各层效果,帮助开发者精准调整模型结构。
- Offset模式
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86551031-86647c80-bf76-11ea-8ec2-8c86826c8137.png" width="100%"/>
</p>
- Overlay模式
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86551033-882e4000-bf76-11ea-8e6a-af954c662ced.png" width="100%"/>
</p>
### PR Curve
精度-召回率曲线,帮助开发者在模型精度与召回率之间进行权衡,设定最佳阈值。
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86738774-ee46c000-c067-11ea-90d2-a98aac445cca.png" width="100%"/>
</p>
### High Dimensional
将高维数据进行降维展示,目前支持T-SNE、PCA两种降维方式,用于深入分析高维数据间的关系,方便用户根据数据特征进行算法优化。
......@@ -201,9 +286,15 @@ app.run(logdir="./log")
## 开源贡献
VisualDL 是由 [PaddlePaddle](http://www.paddlepaddle.org/)[ECharts](http://echarts.baidu.com/) 合作推出的开源项目。欢迎所有人使用,提意见以及贡献代码。
VisualDL 是由 [PaddlePaddle](https://www.paddlepaddle.org/) 和 [ECharts](https://echarts.apache.org/) 合作推出的开源项目。
Graph 相关功能由 [Netron](https://github.com/lutzroeder/netron) 提供技术支持。
欢迎所有人使用,提意见以及贡献代码。
## 更多细节
想了解更多关于VisualDL可视化功能的使用详情介绍,请查看**Visual DL 使用指南**
想了解更多关于VisualDL可视化功能的使用详情,请查看**VisualDL使用指南**。
## 技术交流
欢迎您加入VisualDL官方QQ群:1045783368,与飞桨团队以及其他用户共同针对VisualDL进行讨论与交流。
# VisualDL 使用指南
### 概述
VisualDL 是一个面向深度学习任务设计的可视化工具。VisualDL 利用了丰富的图表来展示数据,用户可以更直观、清晰地查看数据的特征与变化趋势,有助于分析数据、及时发现错误,进而改进神经网络模型的设计。
目前,VisualDL 支持 scalar, image, high dimensional 三个组件,项目正处于高速迭代中,敬请期待新组件的加入。
目前,VisualDL 支持 scalar, image, audio, graph, histogram, pr curve, high dimensional 七个组件,项目正处于高速迭代中,敬请期待新组件的加入。
| 组件名称 | 展示图表 | 作用 |
| :----------------------------------------------------------: | :--------: | :----------------------------------------------------------- |
| <a href="#1">[ Scalar](#Scalar -- 折线图组件)</a> | 折线图 | 动态展示损失函数值、准确率等标量数据 |
| <a href="#3">[Image](#Image -- 图片可视化组件)</a> | 图片可视化 | 显示图片,可显示输入图片和处理后的结果,便于查看中间过程的变化 |
| <a href="#6">[High Dimensional](#High Dimensional -- 数据降维组件)</a> | 数据降维 | 将高维数据映射到 2D/3D 空间来可视化嵌入,便于观察不同数据的相关性 |
| :-------------------------------------------------: | :--------: | :----------------------------------------------------------- |
| [Scalar](#Scalar--折线图组件)                        | 折线图     | 动态展示损失函数值、准确率等标量数据                          |
| [Image](#Image--图片可视化组件) | 图片可视化 | 显示图片,可显示输入图片和处理后的结果,便于查看中间过程的变化 |
| [Audio](#Audio--音频播放组件) | 音频播放 | 播放训练过程中的音频数据,监控语音识别与合成等任务的训练过程 |
| [Graph](#Graph--网络结构组件) | 网络结构 | 展示网络结构、节点属性及数据流向,辅助学习、优化网络结构 |
| [Histogram](#Histogram--直方图组件) | 直方图 | 展示训练过程中权重、梯度等张量的分布 |
| [PR Curve](#PR-Curve--PR曲线组件) | 折线图 | 权衡精度与召回率之间的平衡关系,便于选择最佳阈值 |
| [High Dimensional](#High-Dimensional--数据降维组件) | 数据降维 | 将高维数据映射到 2D/3D 空间来可视化嵌入,便于观察不同数据的相关性 |
## Scalar -- 折线图组件
......@@ -29,16 +29,22 @@ Scalar 组件的记录接口如下:
```python
add_scalar(tag, value, step, walltime=None)
```
接口参数说明如下:
|参数|格式|含义|
|-|-|-|
|tag|string|记录指标的标志,如`train/loss`,不能含有`%`|
|value|float|要记录的数据值|
|step|int|记录的步数|
|walltime|int|记录数据的时间戳,默认为当前时间戳|
| 参数 | 格式 | 含义 |
| -------- | ------ | ------------------------------------------- |
| tag | string | 记录指标的标志,如`train/loss`,不能含有`%` |
| value | float | 要记录的数据值 |
| step | int | 记录的步数 |
| walltime | int | 记录数据的时间戳,默认为当前时间戳 |
### Demo
下面展示了使用 Scalar 组件记录数据的示例,代码见[Scalar组件](../../demo/components/scalar_test.py)
- 基础使用
下面展示了使用 Scalar 组件记录数据的示例,代码文件请见[Scalar组件](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/components/scalar_test.py)
```python
from visualdl import LogWriter
......@@ -52,7 +58,9 @@ if __name__ == '__main__':
# 向记录器添加一个tag为`loss`的数据
writer.add_scalar(tag="loss", step=step, value=1/(value[step] + 1))
```
运行上述程序后,在命令行执行
```shell
visualdl --logdir ./log --port 8080
```
......@@ -60,11 +68,58 @@ visualdl --logdir ./log --port 8080
接着在浏览器打开`http://127.0.0.1:8080`,即可查看以下折线图。
<p align="center">
<img src="http://visualdl.bj.bcebos.com/images/scalar-globalstatic.png" width="100%"/>
<img src="https://user-images.githubusercontent.com/48054808/82397559-478c6d00-9a83-11ea-80db-a0844dcaca35.png" width="100%"/>
</p>
- 多组实验对比
下面展示了使用Scalar组件实现多组实验对比的示例。
多组实验对比的实现分为两步:
1. 创建子日志文件夹储存每组实验的参数数据
2. 将数据写入scalar组件时,**使用相同的tag**,即可实现对比**不同实验**的**同一类型参数**
```python
from visualdl import LogWriter
if __name__ == '__main__':
value = [i/1000.0 for i in range(1000)]
# 步骤一:创建父文件夹:log与子文件夹:scalar_test
with LogWriter(logdir="./log/scalar_test") as writer:
for step in range(1000):
# 步骤二:向记录器添加一个tag为`train/acc`的数据
writer.add_scalar(tag="train/acc", step=step, value=value[step])
# 步骤二:向记录器添加一个tag为`train/loss`的数据
writer.add_scalar(tag="train/loss", step=step, value=1/(value[step] + 1))
# 步骤一:创建第二个子文件夹scalar_test2
value = [i/500.0 for i in range(1000)]
with LogWriter(logdir="./log/scalar_test2") as writer:
for step in range(1000):
# 步骤二:在同样名为`train/acc`下添加scalar_test2的accuracy的数据
writer.add_scalar(tag="train/acc", step=step, value=value[step])
# 步骤二:在同样名为`train/loss`下添加scalar_test2的loss的数据
writer.add_scalar(tag="train/loss", step=step, value=1/(value[step] + 1))
```
运行上述程序后,在命令行执行
```shell
visualdl --logdir ./log --port 8080
```
接着在浏览器打开`http://127.0.0.1:8080`,即可查看以下折线图,对比「scalar_test」和「scalar_test2」的Accuracy和Loss。
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84644158-5efb3080-af31-11ea-8e64-bbe4078425f4.png" width="100%"/>
</p>
*多组实验对比的应用案例可参考AI Studio项目:[VisualDL 2.0--眼疾识别训练可视化](https://aistudio.baidu.com/aistudio/projectdetail/502834)*
### 功能操作说明
* 支持数据卡片「最大化」、「还原」、「坐标系转化」(y轴对数坐标)、「下载」折线图
......@@ -75,6 +130,8 @@ visualdl --logdir ./log --port 8080
* 数据点Hover展示详细信息
<p align="center">
......@@ -83,6 +140,8 @@ visualdl --logdir ./log --port 8080
* 可搜索卡片标签,展示目标图像
<p align="center">
......@@ -91,6 +150,8 @@ visualdl --logdir ./log --port 8080
* 可搜索打点数据标签,展示特定数据
<p align="center">
......@@ -98,6 +159,8 @@ visualdl --logdir ./log --port 8080
</p>
* X轴有三种衡量尺度
1. Step:迭代次数
......@@ -107,6 +170,8 @@ visualdl --logdir ./log --port 8080
<p align="center">
<img src="http://visualdl.bj.bcebos.com/images/x-axis.png" width="40%"/>
</p>
* 可调整曲线平滑度,以便更好地展现参数整体的变化趋势
<p align="center">
......@@ -114,6 +179,8 @@ visualdl --logdir ./log --port 8080
</p>
## Image -- 图片可视化组件
### 介绍
......@@ -127,16 +194,20 @@ Image 组件的记录接口如下:
```python
add_image(tag, img, step, walltime=None)
```
接口参数说明如下:
|参数|格式|含义|
|-|-|-|
|tag|string|记录指标的标志,如`train/loss`,不能含有`%`|
|img|numpy.ndarray|以ndarray格式表示的图片|
|step|int|记录的步数|
|walltime|int|记录数据的时间戳,默认为当前时间戳|
| 参数 | 格式 | 含义 |
| -------- | ------------- | ------------------------------------------- |
| tag | string | 记录指标的标志,如`train/loss`,不能含有`%` |
| img | numpy.ndarray | 以ndarray格式表示的图片 |
| step | int | 记录的步数 |
| walltime | int | 记录数据的时间戳,默认为当前时间戳 |
### Demo
下面展示了使用 Image 组件记录数据的示例,代码文件请见[Image组件](../../demo/components/image_test.py)
下面展示了使用 Image 组件记录数据的示例,代码文件请见[Image组件](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/components/image_test.py)
```python
import numpy as np
from PIL import Image
......@@ -159,11 +230,13 @@ if __name__ == '__main__':
with LogWriter(logdir="./log/image_test/train") as writer:
for step in range(6):
# 添加一个图片数据
writer.add_image(tag="doge",
writer.add_image(tag="eye",
img=random_crop("../../docs/images/eye.jpg"),
step=step)
```
运行上述程序后,在命令行执行
```shell
visualdl --logdir ./log --port 8080
```
......@@ -171,10 +244,12 @@ visualdl --logdir ./log --port 8080
在浏览器输入`http://127.0.0.1:8080`,即可查看图片数据。
<p align="center">
<img src="http://visualdl.bj.bcebos.com/images/image-static.png" width="90%"/>
<img src="http://visualdl.bj.bcebos.com/images/image-static.png" width="100%"/>
</p>
### 功能操作说明
可搜索图片标签显示对应图片数据
......@@ -184,6 +259,8 @@ visualdl --logdir ./log --port 8080
</p>
支持滑动Step/迭代次数查看不同迭代次数下的图片数据
<p align="center">
......@@ -191,6 +268,442 @@ visualdl --logdir ./log --port 8080
</p>
## Audio--音频播放组件
### 介绍
Audio组件用于实时查看训练过程中的音频数据,可监控语音识别与合成等任务的训练过程。
### 记录接口
Audio 组件的记录接口如下:
```python
add_audio(tag, audio_array, step, sample_rate)
```
接口参数说明如下:
| 参数 | 格式 | 含义 |
| ----------- | ------------- | ------------------------------------------ |
| tag | string | 记录指标的标志,如`audio_tag`,不能含有`%` |
| audio_array | numpy.ndarray | 以ndarray格式表示的音频                     |
| step | int | 记录的步数 |
| sample_rate | int | 采样率,**注意正确填写对应音频的原采样率** |
### Demo
下面展示了使用 Audio 组件记录数据的示例,代码文件请见[Audio组件](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/components/audio_test.py)
```python
from visualdl import LogWriter
import numpy as np
import wave
def read_audio_data(audio_path):
"""
Get audio data.
"""
CHUNK = 4096
f = wave.open(audio_path, "rb")
wavdata = []
chunk = f.readframes(CHUNK)
while chunk:
data = np.frombuffer(chunk, dtype='uint8')
wavdata.extend(data)
chunk = f.readframes(CHUNK)
# 8k sample rate, 16bit frame, 1 channel
shape = [8000, 2, 1]
return shape, wavdata
if __name__ == '__main__':
with LogWriter(logdir="./log") as writer:
audio_shape, audio_data = read_audio_data("./testing.wav")
audio_data = np.array(audio_data)
writer.add_audio(tag="audio_tag",
audio_array=audio_data,
step=0,
sample_rate=8000)
```
运行上述程序后,在命令行执行
```shell
visualdl --logdir ./log --port 8080
```
在浏览器输入`http://127.0.0.1:8080`,即可查看音频数据。
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/87659138-b4746880-c78f-11ea-965b-c33804e7c296.png" width="100%"/>
</p>
### 功能操作说明
- 可搜索音频标签显示对应音频数据
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/87661431-29956d00-c793-11ea-833b-172d8fc1b221.png" width="100%"/>
</p>
- 支持滑动Step/迭代次数试听不同迭代次数下的音频数据
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/87661089-a07e3600-c792-11ea-8740-cbe99a64d830.png" width="60%"/>
</p>
- 支持播放/暂停音频数据
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/87661130-b3910600-c792-11ea-9f9f-2ae66132e9de.png" width="60%"/>
</p>
- 支持音量调节
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/87661497-49c52c00-c793-11ea-9eeb-471543cd2a0b.png" width="60%"/>
</p>
- 支持音频下载
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/87661166-c277b880-c792-11ea-8ad7-5c60bb08379b.png" width="60%"/>
</p>
## Graph--网络结构组件
### 介绍
Graph组件可一键可视化模型的网络结构,用于查看模型属性、节点信息、节点输入输出等,并支持节点搜索,协助开发者快速分析模型结构、了解数据流向。
### Demo
共有两种启动方式:
- 前端模型文件拖拽上传:
- 如只需使用Graph组件,则无需添加任何参数,在命令行执行`visualdl`后即可启动面板进行上传。
- 如果同时需使用其他功能,在命令行指定日志文件路径(以`./log`为例)即可启动面板进行上传:
```shell
visualdl --logdir ./log --port 8080
```
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487396-44c31780-acd1-11ea-831a-1632e636613d.png" width="80%"/>
</p>
- 后端启动Graph:
- 在命令行加入参数`--model`并指定**模型文件**路径(非文件夹路径),即可启动并查看网络结构可视化:
```shell
visualdl --model ./log/model --port 8080
```
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84490149-51e20580-acd5-11ea-9663-1f156892c0e0.png" width="100%"/>
</p>
### 功能操作说明
- 一键上传模型
- 支持模型格式:PaddlePaddle、ONNX、Keras、Core ML、Caffe、Caffe2、Darknet、MXNet、ncnn、TensorFlow Lite
- 实验性支持模型格式:TorchScript、PyTorch、Torch、 ArmNN、BigDL、Chainer、CNTK、Deeplearning4j、MediaPipe、ML.NET、MNN、OpenVINO、Scikit-learn、Tengine、TensorFlow.js、TensorFlow
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487396-44c31780-acd1-11ea-831a-1632e636613d.png" width="80%"/>
</p>
- 支持上下左右任意拖拽模型、放大和缩小模型
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/89163601-6ab9b980-d5a8-11ea-9c6d-2dc5eaed0d41.gif" width="100%"/>
</p>
- 搜索定位到对应节点
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487694-b9965180-acd1-11ea-8214-34f3febc1828.png" width="30%"/>
</p>
- 点击查看模型属性
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487751-cadf5e00-acd1-11ea-9ce2-4fdfeeea9c5a.png" width="30%"/>
</p>
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487759-d03ca880-acd1-11ea-9294-520ef7f9e0b1.png" width="30%"/>
</p>
- 支持选择模型展示的信息
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487829-ee0a0d80-acd1-11ea-8563-6682a15483d9.png" width="23%"/>
</p>
- 支持以PNG、SVG格式导出模型结构图
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487884-ff531a00-acd1-11ea-8b12-5221db78683e.png" width="30%"/>
</p>
- 点击节点即可展示对应属性信息
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487941-13971700-acd2-11ea-937d-42fb524b9ee1.png" width="30%"/>
</p>
- 支持一键更换模型
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/84487998-27db1400-acd2-11ea-83d7-5d75832ef41d.png" width="25%"/>
</p>
## Histogram--直方图组件
### 介绍
Histogram组件以直方图形式展示Tensor(weight、bias、gradient等)数据在训练过程中的变化趋势。深入了解模型各层效果,帮助开发者精准调整模型结构。
### 记录接口
Histogram 组件的记录接口如下:
```python
add_histogram(tag, values, step, walltime=None, buckets=10)
```
接口参数说明如下:
| 参数 | 格式 | 含义 |
| -------- | --------------------- | ------------------------------------------- |
| tag | string | 记录指标的标志,如`train/loss`,不能含有`%` |
| values | numpy.ndarray or list | 以ndarray或list格式表示的数据 |
| step | int | 记录的步数 |
| walltime | int | 记录数据的时间戳,默认为当前时间戳 |
| buckets | int | 生成直方图的分段数,默认为10 |
### Demo
下面展示了使用 Histogram组件记录数据的示例,代码文件请见[Histogram组件](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/components/histogram_test.py)
```python
from visualdl import LogWriter
import numpy as np
if __name__ == '__main__':
values = np.arange(0, 1000)
with LogWriter(logdir="./log/histogram_test/train") as writer:
for index in range(1, 101):
interval_start = 1 + 2 * index / 100.0
interval_end = 6 - 2 * index / 100.0
data = np.random.uniform(interval_start, interval_end, size=(10000))
writer.add_histogram(tag='default tag',
values=data,
step=index,
buckets=10)
```
运行上述程序后,在命令行执行
```shell
visualdl --logdir ./log --port 8080
```
在浏览器输入`http://127.0.0.1:8080`,即可查看训练参数直方图。
### 功能操作说明
- 支持数据卡片「最大化」、直方图「下载」
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86535351-42d82700-bf12-11ea-89f0-171280e7c526.png" width="60%"/>
</p>
- 可选择Offset或Overlay模式
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86535413-c134c900-bf12-11ea-9ad6-f0ad8eafa76f.png" width="30%"/>
</p>
- Offset模式
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86536435-2b9d3780-bf1a-11ea-9981-92f837d22ae5.png" width="60%"/>
</p>
- Overlay模式
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86536458-5ab3a900-bf1a-11ea-985e-05f06c1b762b.png" width="60%"/>
</p>
- 数据点Hover展示参数值、训练步数、频次
- 在第240次训练步数时,权重为-0.0031,且出现的频次是2734次
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86536482-80d94900-bf1a-11ea-9e12-5bea9f382b34.png" width="60%"/>
</p>
- 可搜索卡片标签,展示目标直方图
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86536503-baaa4f80-bf1a-11ea-80ab-cd988617d018.png" width="30%"/>
</p>
- 可搜索打点数据标签,展示特定数据流
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86536639-b894c080-bf1b-11ea-9ee5-cf815dd4bbd7.png" width="30%"/>
</p>
## PR Curve--PR曲线组件
### 介绍
PR Curve以折线图形式呈现精度与召回率的权衡分析,帮助用户清晰直观地了解模型训练效果,便于分析模型是否达到理想标准。
### 记录接口
PR Curve组件的记录接口如下:
```python
add_pr_curve(tag, labels, predictions, step=None, num_thresholds=10)
```
接口参数说明如下:
| 参数 | 格式 | 含义 |
| -------------- | --------------------- | ------------------------------------------- |
| tag | string | 记录指标的标志,如`train/loss`,不能含有`%` |
| labels | numpy.ndarray or list | 以ndarray或list格式表示的实际类别 |
| predictions | numpy.ndarray or list | 以ndarray或list格式表示的预测类别 |
| step | int | 记录的步数 |
| num_thresholds | int | 阈值设置的个数,默认为10,最大值为127 |
### Demo
下面展示了使用 PR Curve 组件记录数据的示例,代码文件请见[PR Curve组件](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/components/pr_curve_test.py)
```python
from visualdl import LogWriter
import numpy as np
with LogWriter("./log/pr_curve_test/train") as writer:
for step in range(3):
labels = np.random.randint(2, size=100)
predictions = np.random.rand(100)
writer.add_pr_curve(tag='pr_curve',
labels=labels,
predictions=predictions,
step=step,
num_thresholds=5)
```
运行上述程序后,在命令行执行
```shell
visualdl --logdir ./log --port 8080
```
接着在浏览器打开`http://127.0.0.1:8080`,即可查看PR Curve
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86738774-ee46c000-c067-11ea-90d2-a98aac445cca.png" width="100%"/>
</p>
### 功能操作说明
- 支持数据卡片「最大化」、「还原」、「下载」PR曲线
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86740067-f18e7b80-c068-11ea-96bf-52cb7da1f799.png" width="60%"/>
</p>
- 数据点Hover展示详细信息:阈值对应的TP、TN、FP、FN
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86740477-43370600-c069-11ea-93f0-f4d05445fbab.png" width="70%"/>
</p>
- 可搜索卡片标签,展示目标图表
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86740670-66fa4c00-c069-11ea-9ee3-0a22e2d0dbec.png" width="50%"/>
</p>
- 可搜索打点数据标签,展示特定数据
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86740817-809b9380-c069-11ea-9453-6531e3ff5f43.png" width="50%"/>
</p>
- 支持查看不同训练步数下的PR曲线
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86741057-b04a9b80-c069-11ea-9fef-2dcc16f9cd46.png" width="50%"/>
</p>
- X轴-时间显示类型有三种衡量尺度
- Step:迭代次数
- Walltime:训练绝对时间
- Relative:训练时长
<p align="center">
<img src="https://user-images.githubusercontent.com/48054808/86741304-db34ef80-c069-11ea-86eb-787b49ed3705.png" width="50%"/>
</p>
## High Dimensional -- 数据降维组件
### 介绍
......@@ -207,16 +720,20 @@ High Dimensional 组件的记录接口如下:
```python
add_embeddings(tag, labels, hot_vectors, walltime=None)
```
接口参数说明如下:
|参数|格式|含义|
|-|-|-|
|tag|string|记录指标的标志,如`default`,不能含有`%`|
|labels|numpy.array 或 list|一维数组表示的标签,每个元素是一个string类型的字符串|
|hot_vectors|numpy.array or list|与labels一一对应,每个元素可以看作是某个标签的特征|
|walltime|int|记录数据的时间戳,默认为当前时间戳|
| 参数 | 格式 | 含义 |
| ----------- | ------------------- | ---------------------------------------------------- |
| tag | string | 记录指标的标志,如`default`,不能含有`%` |
| labels | numpy.array 或 list | 一维数组表示的标签,每个元素是一个string类型的字符串 |
| hot_vectors | numpy.array or list | 与labels一一对应,每个元素可以看作是某个标签的特征 |
| walltime | int | 记录数据的时间戳,默认为当前时间戳 |
### Demo
下面展示了使用 High Dimensional 组件记录数据的示例,代码见[High Dimensional组件](../../demo/components/high_dimensional_test.py)
下面展示了使用 High Dimensional 组件记录数据的示例,代码文件请见[High Dimensional组件](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/components/high_dimensional_test.py)
```python
from visualdl import LogWriter
......@@ -237,7 +754,9 @@ if __name__ == '__main__':
labels=labels,
hot_vectors=hot_vectors)
```
运行上述程序后,在命令行执行
```shell
visualdl --logdir ./log --port 8080
```
......@@ -245,5 +764,11 @@ visualdl --logdir ./log --port 8080
接着在浏览器打开`http://127.0.0.1:8080`,即可查看降维后的可视化数据。
<p align="center">
<img src="http://visualdl.bj.bcebos.com/images/dynamic_high_dimensional.gif" width="80%"/>
<img src="http://visualdl.bj.bcebos.com/images/dynamic_high_dimensional.gif" width="100%"/>
</p>
......@@ -2,11 +2,22 @@
环境变量FLAGS
==================
调用说明
----------
PaddlePaddle中的环境变量FLAGS支持两种设置方式。
- 通过export来设置环境变量,如 :code:`export FLAGS_eager_delete_tensor_gb=1.0` 。
- 通过API :code:`get_flags` 和 :code:`set_flags` 来打印和设置环境变量FLAGS。API使用详情请参考 :ref:`cn_api_fluid_get_flags` 与 :ref:`cn_api_fluid_set_flags` ,下文给出一个简单示意。
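下面是一个通过Python API读取与设置FLAGS的示意代码(以 :code:`FLAGS_eager_delete_tensor_gb` 为例,接口行为以实际安装的Paddle版本为准):

.. code-block:: python

    # 示意:读取并设置环境变量FLAGS
    import paddle.fluid as fluid

    print(fluid.get_flags(['FLAGS_eager_delete_tensor_gb']))   # 读取当前取值
    fluid.set_flags({'FLAGS_eager_delete_tensor_gb': 1.0})     # 设置新的取值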
环境变量FLAGS功能分类
----------------------
.. toctree::
:maxdepth: 1
cudnn_cn.rst
data_cn.rst
debug_cn.rst
......
......@@ -2,6 +2,17 @@
FLAGS
==================
Usage
------
These FLAGS in PaddlePaddle can be set in two ways.
- Set the FLAGS through export. For example: :code:`export FLAGS_eager_delete_tensor_gb=1.0` .
- Use :code:`get_flags` and :code:`set_flags` to print and set the FLAGS. For more information about these APIs, please refer to :ref:`api_fluid_get_flags` and :ref:`api_fluid_set_flags` . A simple sketch is shown below.
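A minimal sketch of reading and setting FLAGS through the Python API (using :code:`FLAGS_eager_delete_tensor_gb` as an example; the exact behavior may vary with the installed Paddle version):

.. code-block:: python

    # Sketch: read and set an environment FLAG
    import paddle.fluid as fluid

    print(fluid.get_flags(['FLAGS_eager_delete_tensor_gb']))   # read the current value
    fluid.set_flags({'FLAGS_eager_delete_tensor_gb': 1.0})     # set a new value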
FLAGS Quick Search
------------------
.. toctree::
:maxdepth: 1
......
......@@ -11,13 +11,14 @@ FLAGS_allocator_strategy
取值范围
---------------
String型,['naive_best_fit', 'auto_growth']中的一个。缺省值为'naive_best_fit'。
String型,['naive_best_fit', 'auto_growth']中的一个。若编译Paddle时CMake使用了-DON_INFER=ON,则缺省值为'naive_best_fit';其他情况下缺省值为'auto_growth'。PaddlePaddle pip安装包的默认策略也是'auto_growth'。
示例
--------
FLAGS_allocator_strategy=naive_best_fit - 使用预分配best fit分配器。
FLAGS_allocator_strategy=naive_best_fit - 使用预分配best fit分配器。PaddlePaddle会先占用大部分可用内存/显存,待实际数据需要时再从中分配。这种方式预占空间较大,但内存/显存碎片较少(比如能够支持模型的最大batch size会变大)。
FLAGS_allocator_strategy=auto_growth - 使用auto growth分配器。
FLAGS_allocator_strategy=auto_growth - 使用auto growth分配器。PaddlePaddle会随着真实数据的需要逐步占用内存/显存,但内存/显存可能会产生碎片(比如能够支持模型的最大batch size会变小)。
FLAGS_eager_delete_scope
......
......@@ -11,13 +11,13 @@ Use to choose allocator strategy of PaddlePaddle.
Values accepted
---------------
String, enum in ['naive_best_fit', 'auto_growth']. The default value is 'naive_best_fit'.
String, enum in ['naive_best_fit', 'auto_growth']. The default value is 'naive_best_fit' if PaddlePaddle is compiled with the -DON_INFER=ON CMake flag; otherwise it is 'auto_growth'. The PaddlePaddle pip package uses 'auto_growth' by default.
Example
--------
FLAGS_allocator_strategy=naive_best_fit would use the pre-allocated best fit allocator.
FLAGS_allocator_strategy=naive_best_fit would use the pre-allocated best fit allocator. The 'naive_best_fit' strategy occupies most of the available GPU memory up front, but leads to less memory fragmentation (i.e., the maximum batch size of models may be larger).
FLAGS_allocator_strategy=auto_growth would use the auto growth allocator.
FLAGS_allocator_strategy=auto_growth would use the auto growth allocator. 'auto_growth' strategy would allocate GPU memory on demand but may lead to more memory fragmentation (i.e., maximum batch size of models may be smaller).
......
......@@ -11,4 +11,4 @@
:hidden:
inference_deployment/index_cn.rst
flags/flags_cn.rst
......@@ -16,5 +16,5 @@ So far you have already been familiar with PaddlePaddle. And the next expectatio
:hidden:
inference_deployment/index_en.rst
flags/flags_en.rst
......@@ -7,15 +7,15 @@
-------------
.. csv-table::
:header: "版本说明", "预测库(1.8.1版本)", "预测库(develop版本)"
:header: "版本说明", "预测库(1.8.3版本)", "预测库(develop版本)"
:widths: 3, 2, 2
"ubuntu14.04_cpu_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-cpu-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_avx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-cpu-avx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_noavx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-cpu-noavx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-noavx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cuda9.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.1_cudnn7.6_avx_mkl_trt6", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7.6-avx-mkl-trt6%2Ffluid_inference.tgz>`_",
"ubuntu14.04_cpu_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-cpu-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_avx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-cpu-avx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_noavx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-cpu-noavx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-noavx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cuda9.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.1_cudnn7.6_avx_mkl_trt6", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-gpu-cuda10.1-cudnn7.6-avx-mkl-trt6%2Ffluid_inference.tgz>`_",
"nv-jetson-cuda10-cudnn7.5-trt5", "`fluid_inference.tar.gz <https://paddle-inference-lib.bj.bcebos.com/1.7.1-nv-jetson-cuda10-cudnn7.5-trt5/fluid_inference.tar.gz>`_",
......
......@@ -7,15 +7,15 @@ Direct Download and Installation
---------------------------------
.. csv-table:: c++ inference library list
:header: "version description", "inference library(1.8.1 version)", "inference library(develop version)"
:header: "version description", "inference library(1.8.3 version)", "inference library(develop version)"
:widths: 3, 2, 2
"ubuntu14.04_cpu_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-cpu-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_avx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-cpu-avx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_noavx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-cpu-noavx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-noavx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cuda9.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.1_cudnn7.6_avx_mkl_trt6", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7.6-avx-mkl-trt6%2Ffluid_inference.tgz>`_",
"ubuntu14.04_cpu_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-cpu-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_avx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-cpu-avx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cpu_noavx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-cpu-noavx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-noavx-openblas/fluid_inference.tgz>`_"
"ubuntu14.04_cuda9.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_"
"ubuntu14.04_cuda10.1_cudnn7.6_avx_mkl_trt6", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.8.3-gpu-cuda10.1-cudnn7.6-avx-mkl-trt6%2Ffluid_inference.tgz>`_",
"nv-jetson-cuda10-cudnn7.5-trt5", "`fluid_inference.tar.gz <https://paddle-inference-lib.bj.bcebos.com/1.7.1-nv-jetson-cuda10-cudnn7.5-trt5/fluid_inference.tar.gz>`_",
Build from Source Code
......
......@@ -5,13 +5,13 @@
下载安装包与对应的测试环境
-------------
| 版本说明 | 预测库(1.8.1版本) | 编译器 | 构建工具 | cuDNN | CUDA |
| 版本说明 | 预测库(1.8.3版本) | 编译器 | 构建工具 | cuDNN | CUDA |
|:---------|:-------------------|:-------------------|:----------------|:--------|:-------|
| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/mkl/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/open/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/mkl/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/open/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/mkl/post107/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 10.0 |
| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/mkl/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/open/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/mkl/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/open/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/mkl/post107/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.4.1 | 10.0 |
### 硬件环境
......
......@@ -5,13 +5,13 @@ Install and Compile C++ Inference Library on Windows
Direct Download and Install
-------------
| Version | Inference Libraries(v1.8.1) | Compiler | Build tools | cuDNN | CUDA |
| Version | Inference Libraries(v1.8.3) | Compiler | Build tools | cuDNN | CUDA |
|:---------|:-------------------|:-------------------|:----------------|:--------|:-------|
| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/mkl/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/open/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/mkl/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/open/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.1/win-infer/mkl/post107/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 10.0 |
| cpu_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/mkl/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cpu_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/open/cpu/fluid_inference_install_dir.zip) | MSVC 2015 update 3| CMake v3.16.0 |
| cuda9.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/mkl/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda9.0_cudnn7_avx_openblas | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/open/post97/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.3.1 | 9.0 |
| cuda10.0_cudnn7_avx_mkl | [fluid_inference.zip](https://paddle-wheel.bj.bcebos.com/1.8.3/win-infer/mkl/post107/fluid_inference_install_dir.zip) | MSVC 2015 update 3 | CMake v3.16.0 | 7.4.1 | 10.0 |
### Hardware Environment
......
# 混合精度训练最佳实践
Automatic Mixed Precision (AMP) 是一种自动混合使用半精度(FP16)和单精度(FP32)来加速模型训练的技术。AMP技术可方便用户快速将使用 FP32 训练的模型修改为使用混合精度训练,并通过黑白名单和动态`loss scaling`来保证训练时的数值稳定性,进而避免梯度Infinite或者NaN(Not a Number)。借力于新一代NVIDIA GPU中Tensor Cores的计算性能,PaddlePaddle AMP技术在ResNet50、Transformer等模型上训练速度相对于FP32训练的加速比可达1.5~2.9倍。
### 半精度浮点类型FP16
如图 1 所示,半精度浮点(Half Precision,FP16)是一种相对较新的浮点类型,在计算机中使用2字节(16位)存储。在IEEE 754-2008标准中,它亦被称作binary16。与计算中常用的单精度(FP32)和双精度(FP64)类型相比,FP16更适于在精度要求不高的场景中使用。
<figure align="center">
<img src="https://paddleweb-static.bj.bcebos.com/images/fp16.png" width="600" alt='missing'/>
<figcaption><center>图 1. 半精度和单精度数据示意图</center></figcaption>
</figure>
### 英伟达GPU的FP16算力
在使用相同的超参数下,混合精度训练使用半精度浮点(FP16)和单精度(FP32)浮点即可达到与使用纯单精度训练相同的准确率,并可加速模型的训练速度。这主要得益于英伟达推出的Volta及Turing架构GPU在使用FP16计算时具有如下特点:
* FP16可降低一半的内存带宽和存储需求,这使得在相同的硬件条件下研究人员可使用更大更复杂的模型以及更大的batch size大小。
* FP16可以充分利用英伟达Volta及Turing架构GPU提供的Tensor Cores技术。在相同的GPU硬件上,Tensor Cores的FP16计算吞吐量是FP32的8倍。
### PaddlePaddle AMP功能——牛刀小试
如前文所述,使用FP16数据类型可能会造成计算精度上的损失,但对深度学习领域而言,并不是所有计算都要求很高的精度,一些局部的精度损失对最终训练效果影响很微弱,却能使吞吐和训练速度带来大幅提升。因此,混合精度计算的需求应运而生。具体而言,训练过程中将一些对精度损失不敏感且能利用Tensor Cores进行加速的运算使用半精度处理,而对精度损失敏感部分依然保持FP32计算精度,用以最大限度提升访存和计算效率。
为了避免对每个具体模型人工地去设计和尝试精度混合的方法,PaddlePaddle框架提供自动混合精度训练(AMP)功能,解放"炼丹师"的双手。在PaddlePaddle中使用AMP训练是一件十分容易的事情,用户只需要增加一行代码即可将原有的FP32训练转变为AMP训练。下面以`MNIST`为例介绍PaddlePaddle AMP功能的使用示例。
**MNIST网络定义**
```python
import paddle.fluid as fluid
def MNIST(data, class_dim):
conv1 = fluid.layers.conv2d(data, 16, 5, 1, act=None, data_format='NHWC')
bn1 = fluid.layers.batch_norm(conv1, act='relu', data_layout='NHWC')
pool1 = fluid.layers.pool2d(bn1, 2, 'max', 2, data_format='NHWC')
conv2 = fluid.layers.conv2d(pool1, 64, 5, 1, act=None, data_format='NHWC')
bn2 = fluid.layers.batch_norm(conv2, act='relu', data_layout='NHWC')
pool2 = fluid.layers.pool2d(bn2, 2, 'max', 2, data_format='NHWC')
fc1 = fluid.layers.fc(pool2, size=64, act='relu')
fc2 = fluid.layers.fc(fc1, size=class_dim, act='softmax')
return fc2
```
针对CV(Computer Vision)类模型组网,为获得更高的训练性能需要注意如下三点:
* `conv2d``batch_norm`以及`pool2d`等需要将数据布局设置为`NHWC`,这样有助于使用TensorCore技术加速计算过程<sup><a href="#fn1" id="ref1">1</a></sup>
* Tensor Cores要求在使用FP16加速卷积运算时conv2d的输入/输出通道数为8的倍数<sup><a href="#fn2" id="ref2">2</a></sup>,因此设计网络时推荐将conv2d层的输入/输出通道数设置为8的倍数。
* Tensor Cores要求在使用FP16加速矩阵乘运算时矩阵行数和列数均为8的倍数<sup><a href="#fn3" id="ref3">3</a></sup>,因此设计网络时推荐将fc层的size参数设置为8的倍数。
**FP32 训练**
为了训练 MNIST 网络,还需要定义损失函数来更新权重参数,此处使用的优化器是SGDOptimizer。为了简化说明,这里省略了迭代训练的相关代码,仅体现损失函数及优化器定义相关的内容。
```python
import paddle
import numpy as np
data = fluid.layers.data(
name='image', shape=[None, 28, 28, 1], dtype='float32')
label = fluid.layers.data(name='label', shape=[None, 1], dtype='int64')
out = MNIST(data, class_dim=10)
loss = fluid.layers.cross_entropy(input=out, label=label)
avg_loss = fluid.layers.mean(loss)
sgd = fluid.optimizer.SGDOptimizer(learning_rate=1e-3)
sgd.minimize(avg_loss)
```
**AMP训练**
与FP32训练相比,用户仅需使用PaddlePaddle提供的`fluid.contrib.mixed_precision.decorate` 函数将原来的优化器SGDOptimizer进行封装,然后使用封装后的优化器(mp_sgd)更新参数梯度即可完成向AMP训练的转换,代码如下所示:
```python
sgd = SGDOptimizer(learning_rate=1e-3)
# 此处只需要使用fluid.contrib.mixed_precision.decorate将sgd封装成AMP训练所需的
# 优化器mp_sgd,并使用mp_sgd.minimize(avg_loss)代替原来的sgd.minimize(avg_loss)语句即可。
mp_sgd = fluid.contrib.mixed_precision.decorator.decorate(sgd)
mp_sgd.minimize(avg_loss)
```
运行上述混合精度训练Python脚本时,为获得更好的执行性能,可配置如下环境参数,并保证cuDNN版本在7.4.1及以上。
```shell
export FLAGS_conv_workspace_size_limit=1024 # MB,根据所使用的GPU显存容量及模型特点设置数值,值越大越有可能选择到更快的卷积算法
export FLAGS_cudnn_exhaustive_search=1 # 使用穷举搜索方法来选择快速卷积算法
export FLAGS_cudnn_batchnorm_spatial_persistent=1 # 用于触发batch_norm和relu的融合
```
上述即为最简单的PaddlePaddle AMP功能使用方法。ResNet50模型的AMP训练示例可[点击此处](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/image_classification/README.md#%E6%B7%B7%E5%90%88%E7%B2%BE%E5%BA%A6%E8%AE%AD%E7%BB%83)查看,其他模型使用PaddlePaddle AMP的方法也与此类似。若AMP训练过程中出现连续的loss nan等不收敛现象,可尝试使用[check nan inf工具](https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/flags/check_nan_inf_cn.html#span-id-speed-span)进行调试。
### PaddlePaddle AMP功能——进阶使用
上一小节所述均为默认AMP训练行为,用户当然也可以改变一些默认的参数设置来满足特定的模型训练场景需求。接下来的章节将介绍PaddlePaddle AMP功能使用中用户可配置的参数行为,即进阶使用技巧。
#### 自定义黑白名单
PaddlePaddle AMP功能的实现中,根据FP16数据类型的计算稳定性和加速效果,在框架内部定义了算子(Op)的黑白名单。具体来说,将对FP16计算友好且能利用Tensor Cores的Op归类于白名单,将使用FP16计算会导致数值不稳定的Op归类于黑名单,将对FP16计算影响不大的Op归类于灰名单。然而,框架开发人员不可能考虑到所有的网络模型情况,尤其是那些特殊场景中使用到的模型。用户可以在使用`fluid.contrib.mixed_precision.decorate` 函数时通过指定自定义的黑白名单列表来改变默认的FP16计算行为。
```python
sgd = SGDOptimizer(learning_rate=1e-3)
# list1是白名单op列表,list2是黑名单op列表,list3是黑名单var_name列表(凡是以这些黑名单var_name为输入或输出的op均会被视为黑名单op)
amp_list = AutoMixedPrecisionLists(custom_white_list=list1, custom_black_list=list2, custom_black_varnames=list3)
mp_sgd = fluid.contrib.mixed_precision.decorator.decorate(sgd, amp_list)
mp_sgd.minimize(avg_loss)
```
#### 自动loss scaling
为了避免梯度Infinite或者NaN,PaddlePaddle AMP功能支持根据训练过程中梯度的数值自动调整loss scale值。用户在使用`fluid.contrib.mixed_precision.decorate` 函数时也可以改变与loss scaling相关的参数设置,示例如下:
```python
sgd = SGDOptimizer(learning_rate=1e-3)
mp_sgd = fluid.contrib.mixed_precision.decorator.decorate(sgd,
amp_lists=None,
init_loss_scaling=2**8,
incr_every_n_steps=500,
decr_every_n_nan_or_inf=4,
incr_ratio=2.0,
decr_ratio=0.5,
use_dynamic_loss_scaling=True)
mp_sgd.minimize(avg_loss)
```
`init_loss_scaling`、`incr_every_n_steps` 以及 `decr_every_n_nan_or_inf` 等参数控制着自动loss scaling的行为。它们仅当 `use_dynamic_loss_scaling` 设置为True时有效。下面详述这些参数的意义(其动态调整逻辑可参考参数列表后的示意代码):
* init_loss_scaling(float):初始loss scaling值。
* incr_every_n_steps(int):每经过incr_every_n_steps个连续的正常梯度值才会增大loss scaling值。
* decr_every_n_nan_or_inf(int):每经过decr_every_n_nan_or_inf个连续的无效梯度值(nan或者inf)才会减小loss scaling值。
* incr_ratio(float):每次增大loss scaling值的扩增倍数,其为大于1的浮点数。
* decr_ratio(float):每次减小loss scaling值的比例系数,其为小于1的浮点数。
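下面给出一段纯Python的示意代码,模拟上述参数控制下loss scaling值的动态调整逻辑(仅用于帮助理解参数含义,并非框架内部实现,参数取值沿用上文示例):

```python
# 示意:模拟动态 loss scaling 的调整逻辑(非框架内部实现)
def update_loss_scaling(loss_scaling, is_finite, state,
                        incr_every_n_steps=500, decr_every_n_nan_or_inf=4,
                        incr_ratio=2.0, decr_ratio=0.5):
    """state 记录(连续正常步数, 连续异常步数),返回更新后的 loss_scaling 与 state。"""
    good_steps, bad_steps = state
    if is_finite:
        good_steps, bad_steps = good_steps + 1, 0
        if good_steps == incr_every_n_steps:      # 连续 N 步梯度正常,扩大 loss scaling
            loss_scaling *= incr_ratio
            good_steps = 0
    else:
        good_steps, bad_steps = 0, bad_steps + 1
        if bad_steps == decr_every_n_nan_or_inf:  # 连续 M 步出现 nan/inf,缩小 loss scaling
            loss_scaling *= decr_ratio
            bad_steps = 0
    return loss_scaling, (good_steps, bad_steps)

# 简单用法示意:先出现 500 步正常梯度,再出现 4 步 nan/inf 梯度
scaling, state = 2.0 ** 8, (0, 0)
for is_finite in [True] * 500 + [False] * 4:
    scaling, state = update_loss_scaling(scaling, is_finite, state)
print(scaling)  # 2**8 先乘 incr_ratio=2.0,再乘 decr_ratio=0.5,回到 2**8
```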
### 多卡GPU训练的优化
PaddlePaddle AMP功能对多卡GPU训练进行了深度优化。如图 2 所示,优化之前的参数梯度更新特点:梯度计算时虽然使用的是FP16数据类型,但是不同GPU卡之间的梯度传输数据类型仍为FP32。
<figure align="center">
<img src="https://paddleweb-static.bj.bcebos.com/images/transfer_fp32_grad.png" width="500" alt='missing'/>
<figcaption><center>图 2. 不同GPU卡之间传输梯度使用FP32数据类型(优化前)</center></figcaption>
</figure>
为了降低GPU多卡之间的梯度传输带宽,我们将梯度传输提前至`Cast`操作之前,而每个GPU卡在得到对应的FP16梯度后再执行`Cast`操作将其转变为FP32类型,具体操作详见图 3。这一优化在训练大模型时对减少带宽占用尤其有效,如多卡训练BERT-Large模型。
<figure align="center">
<img src="https://paddleweb-static.bj.bcebos.com/images/transfer_fp16_grad.png" width="500" alt='missing'/>
<figcaption><center>图 3. 不同GPU卡之间传输梯度使用FP16数据类型(优化后)</center></figcaption>
</figure>
### 训练性能对比(AMP VS FP32)
PaddlePaddle AMP技术在ResNet50、Transformer等模型上相对于FP32训练均有可观的训练加速比,下面是ResNet50和ERNIE Large模型的AMP训练相对于FP32训练的加速效果。
<table align="center">
<caption align="bottom"><center>图 4. Paddle AMP训练加速效果(横坐标为卡数,如8*8代表8机8卡)</center></caption>
<tr>
<td> <img src="https://paddleweb-static.bj.bcebos.com/images/resnet50.png" alt='missing'/> </td>
<td> <img src="https://paddleweb-static.bj.bcebos.com/images/ernie.png" alt='missing'/> </td>
</tr>
</table>
从图4所示的图表可以看出,ResNet50的AMP训练相对于FP32训练加速比可达 $2.8\times$ 以上,而ERNIE Large的AMP训练相对于FP32训练加速比亦可达 $1.7\times \sim 2.1\times$ 。
### 参考文献
* <p> <a href="https://arxiv.org/abs/1710.03740"> Mixed Precision Training </a> </p>
* <p> <a href="https://on-demand-gtc.gputechconf.com/gtcnew/sessionview.php?sessionName=cn9312-%e4%bd%bf%e7%94%a8%e8%87%aa%e5%8a%a8%e6%b7%b7%e5%90%88%e7%b2%be%e5%ba%a6%e5%8a%a0%e9%80%9f+paddlepaddle+%e8%ae%ad%e7%bb%83"> 使用自动混合精度加速 PaddlePaddle 训练 </a> </p>
* <p id="fn1"> <a href="https://docs.nvidia.com/deeplearning/performance/dl-performance-convolutional/index.html#tensor-layout"> Tensor Layouts In Memory: NCHW vs NHWC </a> <sup> <a href="#ref1"></a> </sup> </p>
* <p id="fn2"> <a href="https://docs.nvidia.com/deeplearning/performance/dl-performance-convolutional/index.html#channels"> Channels In And Out Requirements </a> <sup> <a href="#ref2"></a> </sup> </p>
* <p id="fn3"> <a href="https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc"> Matrix-Matrix Multiplication Requirements </a> <sup> <a href="#ref3"></a> </sup> </p>
......@@ -8,6 +8,7 @@
singlenode_training_improving/training_best_practice.rst
singlenode_training_improving/memory_optimize.rst
device_switching/device_switching.md
amp/amp.md
multinode_training_improving/cpu_train_best_practice.rst
multinode_training_improving/dist_training_gpu.rst
multinode_training_improving/gpu_training_with_recompute.rst
......
......@@ -31,6 +31,7 @@ fluid.dygraph
dygraph/guard.rst
dygraph/InstanceNorm.rst
dygraph/InverseTimeDecay.rst
dygraph/jit.rst
dygraph/Layer.rst
dygraph/LayerList.rst
dygraph/LayerNorm.rst
......@@ -48,10 +49,12 @@ fluid.dygraph
dygraph/PRelu.rst
dygraph/prepare_context.rst
dygraph/ProgramTranslator.rst
dygraph/ReduceLROnPlateau.rst
dygraph/save_dygraph.rst
dygraph/Sequential.rst
dygraph/SpectralNorm.rst
dygraph/to_variable.rst
dygraph/TracedLayer.rst
dygraph/Tracer.rst
dygraph/TranslatedLayer.rst
dygraph/TreeConv.rst
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_fluid_dygraph_ReduceLROnPlateau:
ReduceLROnPlateau
-----------------
.. autoclass:: paddle.fluid.dygraph.ReduceLROnPlateau
:members:
:noindex:
.. _api_fluid_dygraph_TranslatedLayer:
TranslatedLayer
-----------------------
.. autoclass:: paddle.fluid.dygraph.TranslatedLayer
:members:
:noindex:
===
jit
===
.. toctree::
:maxdepth: 1
jit/save.rst
jit/load.rst
jit/SaveLoadConfig.rst
.. _api_fluid_dygraph_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
.. autoclass:: paddle.fluid.dygraph.jit.SaveLoadConfig
:members:
:noindex:
.. _api_fluid_dygraph_jit_load:
load
------------
.. autofunction:: paddle.fluid.dygraph.jit.load
:noindex:
.. _api_fluid_dygraph_jit_save:
save
------------
.. autofunction:: paddle.fluid.dygraph.jit.save
:noindex:
......@@ -10,7 +10,7 @@ python gen_doc.py --module_name "" --module_prefix "" --output fluid --output_na
python gen_module_index.py fluid fluid
# tensor
for module in math random stat
for module in math random stat linalg search
do
python gen_doc.py --module_name ${module} --module_prefix ${module} --output ${module} --output_name tensor --to_multiple_files True --output_dir tensor
python gen_module_index.py tensor.${module} ${module}
......
......@@ -27,14 +27,14 @@ API Reference
file_names.extend(glob.glob(pattern))
for file_name in sorted(file_names):
with open(file_name, 'r')as f:
with open(file_name, 'r') as f:
for i in range(2):
line = f.readline().strip()
if line.find('paddle.') != -1:
file_object.write(' '+file_name + "\n")
file_object.write(' ' + file_name + "\n")
file_names.remove(file_name)
file_object.write(' '+'fluid.rst' + "\n")
file_object.write(' ' + 'fluid.rst' + "\n")
for file_name in sorted(file_names):
if file_name not in ['index_en.rst', 'fluid.rst']:
file_object.write(' '+file_name + "\n")
if file_name not in ['index_en.rst']:
file_object.write(' ' + file_name + "\n")
......@@ -13,6 +13,7 @@ paddle.imperative
imperative/grad.rst
imperative/guard.rst
imperative/InverseTimeDecay.rst
imperative/jit.rst
imperative/load.rst
imperative/NaturalExpDecay.rst
imperative/no_grad.rst
......@@ -25,3 +26,4 @@ paddle.imperative
imperative/save.rst
imperative/to_variable.rst
imperative/TracedLayer.rst
imperative/TranslatedLayer.rst
.. _api_imperative_TranslatedLayer:
TranslatedLayer
-------------------------------
:doc_source: paddle.fluid.dygraph.io.TranslatedLayer
===
jit
===
.. toctree::
:maxdepth: 1
jit/save.rst
jit/load.rst
jit/SaveLoadConfig.rst
.. _api_imperative_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.SaveLoadConfig
.. _api_imperative_jit_load:
load
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.load
.. _api_imperative_jit_save:
save
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.save
......@@ -6,15 +6,30 @@ API Reference
:maxdepth: 1
../api_guides/index_en.rst
paddle.rst
dataset.rst
tensor.rst
nn.rst
imperative.rst
declarative.rst
optimizer.rst
metric.rst
framework.rst
imperative.rst
io.rst
utils.rst
incubate.rst
metric.rst
nn.rst
optimizer.rst
tensor.rst
fluid.rst
backward.rst
clip.rst
data/data_reader.rst
data/dataset.rst
dygraph.rst
executor.rst
fluid.rst
initializer.rst
layers.rst
metrics.rst
nets.rst
paddle.rst
profiler.rst
regularizer.rst
transpiler.rst
unique_name.rst
review_tmp.rst
......@@ -182,6 +182,7 @@ fluid.layers
layers/mul.rst
layers/multi_box_head.rst
layers/multiclass_nms.rst
layers/matrix_nms.rst
layers/multiplex.rst
layers/MultivariateNormalDiag.rst
layers/natural_exp_decay.rst
......
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_fluid_layers_matrix_nms:
matrix_nms
--------------
.. autofunction:: paddle.fluid.layers.matrix_nms
:noindex:
......@@ -49,6 +49,7 @@ paddle.nn
nn/exponential_decay.rst
nn/filter_by_instag.rst
nn/fsp_matrix.rst
nn/functional.rst
nn/gather_tree.rst
nn/gelu.rst
nn/generate_mask_labels.rst
......@@ -67,6 +68,7 @@ paddle.nn
nn/huber_loss.rst
nn/image_resize.rst
nn/image_resize_short.rst
nn/initializer.rst
nn/inverse_time_decay.rst
nn/iou_similarity.rst
nn/kldiv_loss.rst
......@@ -82,7 +84,7 @@ paddle.nn
nn/logsigmoid.rst
nn/loss.rst
nn/lrn.rst
nn/margin_rank_loss.rst
nn/matrix_nms.rst
nn/maxout.rst
nn/mse_loss.rst
nn/multiclass_nms.rst
......@@ -91,14 +93,13 @@ paddle.nn
nn/npair_loss.rst
nn/one_hot.rst
nn/pad.rst
nn/pad_constant_like.rst
nn/pad2d.rst
nn/pad_constant_like.rst
nn/ParameterList.rst
nn/piecewise_decay.rst
nn/pixel_shuffle.rst
nn/polygon_box_transform.rst
nn/polynomial_decay.rst
nn/pool2d.rst
nn/Pool2D.rst
nn/pool3d.rst
nn/prior_box.rst
......@@ -148,3 +149,5 @@ paddle.nn
nn/while_loop.rst
nn/yolo_box.rst
nn/yolov3_loss.rst
nn/functional/loss/margin_ranking_loss.rst
nn/layer/loss/MarginRankingLoss.rst
==========
functional
==========
.. toctree::
:maxdepth: 1
functional/l1_loss.rst
functional/nll_loss.rst
.. _api_nn_functional_l1_loss:
l1_loss
------
.. autoclass:: paddle.nn.functional.l1_loss
:members:
:inherited-members:
:noindex:
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_nn_functional_loss_margin_ranking_loss:
margin_ranking_loss
-------------------
.. autofunction:: paddle.nn.functional.loss.margin_ranking_loss
:noindex:
.. _api_nn_functional_nll_loss:
nll_loss
-------------------------------
.. autoclass:: paddle.nn.functional.nll_loss
:members:
:inherited-members:
:noindex:
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_fluid_optimizer_PipelineOptimizer:
.. _api_nn_layer_loss_MarginRankingLoss:
PipelineOptimizer
MarginRankingLoss
-----------------
.. autoclass:: paddle.fluid.optimizer.PipelineOptimizer
.. autoclass:: paddle.nn.layer.loss.MarginRankingLoss
:members:
:inherited-members:
:exclude-members: apply_gradients, apply_optimize, backward, load
:noindex:
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_fluid_transpiler_RoundRobin:
.. _api_nn_loss_NLLLoss:
RoundRobin
----------
NLLLoss
-------------------------------
.. autoclass:: paddle.fluid.transpiler.RoundRobin
.. autoclass:: paddle.nn.loss.NLLLoss
:members:
:inherited-members:
:noindex:
......
.. _api_nn_margin_rank_loss:
margin_rank_loss
-------------------------------
:doc_source: paddle.fluid.layers.margin_rank_loss
.. _api_nn_matrix_nms:
matrix_nms
-------------------------------
:doc_source: paddle.fluid.layers.matrix_nms
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_nn_softmax:
softmax
-------------------------------
:doc_source: paddle.fluid.layers.softmax
-------
.. autofunction:: paddle.nn.functional.softmax
:noindex:
......@@ -28,7 +28,6 @@ paddle.optimizer
optimizer/ModelAverage.rst
optimizer/Momentum.rst
optimizer/MomentumOptimizer.rst
optimizer/PipelineOptimizer.rst
optimizer/RecomputeOptimizer.rst
optimizer/RMSPropOptimizer.rst
optimizer/SGD.rst
......
......@@ -45,10 +45,7 @@ paddle
paddle/dot.rst
paddle/elementwise_add.rst
paddle/elementwise_div.rst
paddle/elementwise_equal.rst
paddle/elementwise_floordiv.rst
paddle/elementwise_max.rst
paddle/elementwise_min.rst
paddle/elementwise_mod.rst
paddle/elementwise_mul.rst
paddle/elementwise_pow.rst
......@@ -56,6 +53,7 @@ paddle
paddle/elementwise_sum.rst
paddle/enable_imperative.rst
paddle/equal.rst
paddle/equal_all.rst
paddle/erf.rst
paddle/ExecutionStrategy.rst
paddle/Executor.rst
......@@ -99,9 +97,11 @@ paddle
paddle/manual_seed.rst
paddle/matmul.rst
paddle/max.rst
paddle/maximum.rst
paddle/mean.rst
paddle/meshgrid.rst
paddle/min.rst
paddle/minimum.rst
paddle/mm.rst
paddle/mul.rst
paddle/multiplex.rst
......
......@@ -2,6 +2,6 @@
ExecutionStrategy
-------------------------------
:doc_source: paddle.framework.ExecutionStrategy
:doc_source: paddle.fluid.ExecutionStrategy
......@@ -2,6 +2,6 @@
argsort
-------------------------------
:doc_source: paddle.fluid.layers.argsort
:doc_source: paddle.tensor.argsort
......@@ -2,6 +2,6 @@
cumsum
-------------------------------
:doc_source: paddle.fluid.layers.cumsum
:doc_source: paddle.tensor.cumsum
.. _api_paddle_elementwise_equal:
elementwise_equal
-------------------------------
:doc_source: paddle.fluid.layers.equal
.. _api_paddle_elementwise_max:
elementwise_max
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_max
.. _api_paddle_elementwise_min:
elementwise_min
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_min
.. _api_paddle_equal_all:
equal_all
-------------------------------
:doc_source: paddle.tensor.equal_all
......@@ -2,6 +2,6 @@
greater_equal
-------------------------------
:doc_source: paddle.fluid.layers.greater_equal
:doc_source: paddle.tensor.greater_equal
......@@ -2,6 +2,6 @@
greater_than
-------------------------------
:doc_source: paddle.fluid.layers.greater_than
:doc_source: paddle.tensor.greater_than
......@@ -2,6 +2,6 @@
less_equal
-------------------------------
:doc_source: paddle.fluid.layers.less_equal
:doc_source: paddle.tensor.less_equal
......@@ -2,6 +2,6 @@
less_than
-------------------------------
:doc_source: paddle.fluid.layers.less_than
:doc_source: paddle.tensor.less_than
......@@ -2,6 +2,6 @@
max
-------------------------------
:doc_source: paddle.fluid.layers.reduce_max
:doc_source: paddle.tensor.max
.. _api_paddle_maximum:
maximum
-------------------------------
:doc_source: paddle.tensor.maximum
......@@ -2,6 +2,6 @@
min
-------------------------------
:doc_source: paddle.fluid.layers.reduce_min
:doc_source: paddle.tensor.min
.. _api_paddle_minimum:
minimum
-------------------------------
:doc_source: paddle.tensor.minimum
......@@ -2,6 +2,6 @@
not_equal
-------------------------------
:doc_source: paddle.fluid.layers.not_equal
:doc_source: paddle.tensor.not_equal
......@@ -2,6 +2,6 @@
sort
-------------------------------
:doc_source: paddle.fluid.layers.argsort
:doc_source: paddle.tensor.sort
=================
paddle.review_tmp
=================
.. toctree::
:maxdepth: 1
review_tmp/MarginRankingLoss.rst
review_tmp/margin_ranking_loss.rst
.. _api_nn_loss_MarginRankingLoss_tmp:
MarginRankingLoss
-----------------
.. autoclass:: paddle.nn.loss.MarginRankingLoss
:members:
:inherited-members:
:noindex:
.. _api_nn_functional_margin_ranking_loss_tmp:
margin_ranking_loss
-------------------
.. autofunction:: paddle.nn.functional.margin_ranking_loss
:noindex:
......@@ -20,19 +20,18 @@ paddle.tensor
tensor/cos.rst
tensor/create_tensor.rst
tensor/crop_tensor.rst
tensor/cross.rst
tensor/cumsum.rst
tensor/diag.rst
tensor/div.rst
tensor/elementwise_add.rst
tensor/elementwise_div.rst
tensor/elementwise_equal.rst
tensor/elementwise_floordiv.rst
tensor/elementwise_max.rst
tensor/elementwise_min.rst
tensor/elementwise_mod.rst
tensor/elementwise_mul.rst
tensor/elementwise_pow.rst
tensor/elementwise_sub.rst
tensor/equal_all.rst
tensor/erf.rst
tensor/exp.rst
tensor/expand.rst
......@@ -63,8 +62,10 @@ paddle.tensor
tensor/logical_xor.rst
tensor/math.rst
tensor/max.rst
tensor/maximum.rst
tensor/mean.rst
tensor/min.rst
tensor/minimum.rst
tensor/mm.rst
tensor/mul.rst
tensor/multiplex.rst
......@@ -92,6 +93,7 @@ paddle.tensor
tensor/scatter.rst
tensor/scatter_nd.rst
tensor/scatter_nd_add.rst
tensor/search.rst
tensor/shape.rst
tensor/shard_index.rst
tensor/shuffle.rst
......
......@@ -2,6 +2,6 @@
argsort
-------------------------------
:doc_source: paddle.fluid.layers.argsort
:doc_source: paddle.tensor.argsort
.. _api_tensor_cn_cross:
cross
-------------------------------
:doc_source: paddle.tensor.cross
......@@ -2,6 +2,6 @@
cumsum
-------------------------------
:doc_source: paddle.fluid.layers.cumsum
:doc_source: paddle.tensor.cumsum
.. _api_tensor_cn_elementwise_equal:
elementwise_equal
-------------------------------
:doc_source: paddle.fluid.layers.equal
.. _api_tensor_cn_elementwise_max:
elementwise_max
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_max
.. _api_tensor_cn_elementwise_min:
elementwise_min
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_min
.. _api_tensor_cn_equal_all:
equal_all
-------------------------------
:doc_source: paddle.tensor.equal_all
......@@ -2,6 +2,6 @@
greater_equal
-------------------------------
:doc_source: paddle.fluid.layers.greater_equal
:doc_source: paddle.tensor.greater_equal
......@@ -2,6 +2,6 @@
greater_than
-------------------------------
:doc_source: paddle.fluid.layers.greater_than
:doc_source: paddle.tensor.greater_than
......@@ -2,6 +2,6 @@
less_equal
-------------------------------
:doc_source: paddle.fluid.layers.less_equal
:doc_source: paddle.tensor.less_equal
......@@ -2,6 +2,6 @@
less_than
-------------------------------
:doc_source: paddle.fluid.layers.less_than
:doc_source: paddle.tensor.less_than
......@@ -2,6 +2,6 @@
max
-------------------------------
:doc_source: paddle.fluid.layers.reduce_max
:doc_source: paddle.tensor.max
.. _api_tensor_cn_maximum:
maximum
-------------------------------
:doc_source: paddle.tensor.maximum
.. _api_tensor_cn_mean:
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_tensor_mean:
mean
-------------------------------
:doc_source: paddle.fluid.layers.mean
---------
.. autofunction:: paddle.tensor.mean
:noindex:
......@@ -2,6 +2,6 @@
min
-------------------------------
:doc_source: paddle.fluid.layers.reduce_min
:doc_source: paddle.tensor.min
.. _api_tensor_cn_minimum:
minimum
-------------------------------
:doc_source: paddle.tensor.minimum
......@@ -2,6 +2,6 @@
not_equal
-------------------------------
:doc_source: paddle.fluid.layers.not_equal
:doc_source: paddle.tensor.not_equal
.. _api_tensor_cn_ones_like:
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_tensor_ones_like:
ones_like
-------------------------------
:doc_source: paddle.fluid.layers.ones_like
---------
.. autofunction:: paddle.tensor.ones_like
:noindex:
......@@ -5,6 +5,7 @@ random
.. toctree::
:maxdepth: 1
random/rand.rst
random/randint.rst
random/randn.rst
random/randperm.rst
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
!DO NOT EDIT THIS FILE MANUALLY!
.. _api_tensor_random_randn:
randn
-----
.. autofunction:: paddle.tensor.random.randn
:noindex:
......@@ -2,6 +2,6 @@
sort
-------------------------------
:doc_source: paddle.fluid.layers.argsort
:doc_source: paddle.tensor.sort
......@@ -10,4 +10,3 @@ fluid.transpiler
transpiler/HashName.rst
transpiler/memory_optimize.rst
transpiler/release_memory.rst
transpiler/RoundRobin.rst
......@@ -64,8 +64,8 @@ GradientClipByGlobalNorm
# return Parameter.name=="fc_0.w_0"
# clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0, need_clip=fileter_func)
sgd_optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.1)
sgd_optimizer.minimize(loss, grad_clip=clip)
sgd_optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.1, grad_clip=clip)
sgd_optimizer.minimize(loss)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
......@@ -101,5 +101,7 @@ GradientClipByGlobalNorm
# clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0, need_clip=fileter_func)
sgd_optimizer = fluid.optimizer.SGD(
learning_rate=0.1, parameter_list=linear.parameters())
sgd_optimizer.minimize(loss, grad_clip=clip)
\ No newline at end of file
learning_rate=0.1,
parameter_list=linear.parameters(),
grad_clip=clip)
sgd_optimizer.minimize(loss)
......@@ -15,6 +15,7 @@ fluid.dygraph
dygraph_cn/Conv2DTranspose_cn.rst
dygraph_cn/Conv3D_cn.rst
dygraph_cn/Conv3DTranspose_cn.rst
dygraph_cn/CosineAnnealingDecay_cn.rst
dygraph_cn/CosineDecay_cn.rst
dygraph_cn/declarative_cn.rst
dygraph_cn/Dropout_cn.rst
......@@ -27,11 +28,14 @@ fluid.dygraph
dygraph_cn/guard_cn.rst
dygraph_cn/InstanceNorm_cn.rst
dygraph_cn/InverseTimeDecay_cn.rst
dygraph_cn/jit_cn.rst
dygraph_cn/LambdaDecay_cn.rst
dygraph_cn/Layer_cn.rst
dygraph_cn/LayerList_cn.rst
dygraph_cn/LayerNorm_cn.rst
dygraph_cn/Linear_cn.rst
dygraph_cn/load_dygraph_cn.rst
dygraph_cn/MultiStepDecay_cn.rst
dygraph_cn/NaturalExpDecay_cn.rst
dygraph_cn/NCE_cn.rst
dygraph_cn/NoamDecay_cn.rst
......@@ -44,10 +48,13 @@ fluid.dygraph
dygraph_cn/PRelu_cn.rst
dygraph_cn/prepare_context_cn.rst
dygraph_cn/ProgramTranslator_cn.rst
dygraph_cn/ReduceLROnPlateau_cn.rst
dygraph_cn/save_dygraph_cn.rst
dygraph_cn/Sequential_cn.rst
dygraph_cn/SpectralNorm_cn.rst
dygraph_cn/StepDecay_cn.rst
dygraph_cn/to_variable_cn.rst
dygraph_cn/TracedLayer_cn.rst
dygraph_cn/Tracer_cn.rst
dygraph_cn/TranslatedLayer_cn.rst
dygraph_cn/TreeConv_cn.rst
......@@ -46,7 +46,7 @@ Conv2D
参数:
- **num_channels** (int) - 输入图像的通道数。
- **num_fliters** (int) - 滤波器的个数,和输出特征图个数相同。
- **num_filters** (int) - 滤波器的个数,和输出特征图个数相同。
- **filter_size** (int|tuple) - 滤波器大小。如果 ``filter_size`` 是一个元组,则必须包含两个整型数,分别表示滤波器高度和宽度。否则,表示滤波器高度和宽度均为 ``filter_size`` 。
- **stride** (int|tuple, 可选) - 步长大小。如果 ``stride`` 为元组,则必须包含两个整型数,分别表示垂直和水平滑动步长。否则,表示垂直和水平滑动步长均为 ``stride`` 。默认值:1。
- **padding** (int|tuple, 可选) - 填充大小。如果 ``padding`` 为元组,则必须包含两个整型数,分别表示竖直和水平边界填充大小。否则,表示竖直和水平边界填充大小均为 ``padding`` 。默认值:0。
......
.. _cn_api_fluid_dygraph_LambdaDecay:
LambdaDecay
-------------------------------
.. py:class:: paddle.fluid.dygraph.LambdaDecay(learning_rate, lr_lambda)
:api_attr: 命令式编程模式(动态图)
该API提供 lambda函数 设置学习率的功能。 ``lr_lambda`` 为一个lambda函数,其通过 ``epoch`` 计算出一个因子,该因子会乘以初始学习率。
算法可以描述为:
.. code-block:: text
learning_rate = 0.5 # init learning_rate
lr_lambda = lambda epoch: 0.95 ** epoch
learning_rate = 0.5 # epoch 0
learning_rate = 0.475 # epoch 1
learning_rate = 0.45125 # epoch 2
参数:
- **learning_rate** (float|int) - 初始化的学习率。可以是Python的float或int。
- **lr_lambda** (function) - ``lr_lambda`` 为一个lambda函数,其通过 ``epoch`` 计算出一个因子,该因子会乘以初始学习率。
返回: 无
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
with fluid.dygraph.guard():
x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
linear = fluid.dygraph.Linear(10, 10)
input = fluid.dygraph.to_variable(x)
scheduler = fluid.dygraph.LambdaDecay(0.5, lr_lambda=lambda x: 0.95**x)
adam = fluid.optimizer.Adam(learning_rate = scheduler, parameter_list = linear.parameters())
for epoch in range(6):
for batch_id in range(5):
out = linear(input)
loss = fluid.layers.reduce_mean(out)
adam.minimize(loss)
scheduler.epoch()
print("epoch:%d, current lr is %f" .format(epoch, adam.current_step_lr()))
# epoch:0, current lr is 0.5
# epoch:1, current lr is 0.475
# epoch:2, current lr is 0.45125
.. py:method:: epoch(epoch=None)
通过当前的 epoch 调整学习率,调整后的学习率将会在下一次调用 ``optimizer.minimize`` 时生效。
参数:
- **epoch** (int|float,可选) - 类型:int或float。指定当前的epoch数。默认:无,此时将会自动累计epoch数。
返回:
**代码示例**:
参照上述示例代码。
......@@ -256,6 +256,87 @@ hook(Layer, input, output) -> None or modified output
for prefix, layer in model.named_sublayers():
print(prefix, layer)
.. py:method:: register_buffer(name, variable, persistable=True)
将一个Variable注册为buffer
buffer是一个非参数类型的变量,不会被优化器更新,但在评估或预测阶段可能是必要的状态变量。比如 ``BatchNorm`` 中的均值和方差。
注册的buffer默认是可持久性的,会被保存到 ``state_dict`` 中。如果指定 ``persistable`` 参数为False,则会注册一个非持久性的buffer,即不会同步和保存到 ``state_dict`` 中。
参数:
- **name** (str) - 注册buffer的名字。可以通过此名字来访问已注册的buffer
- **variable** (Variable) - 将被注册为buffer的变量。
- **persistable** (bool, 可选) - 注册的buffer是否需要可持久性地保存到 ``state_dict`` 中。
返回:None
返回类型:None
**代码示例**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.Linear(10, 3)
value = np.array([0]).astype("float32")
buffer = fluid.dygraph.to_variable(value)
linear.register_buffer("buf_name", buffer, persistable=True)
# get the buffer by attribute.
print(linear.buf_name)
.. py:method:: buffers(include_sublayers=True)
返回一个由当前层及其子层的所有buffers组成的列表。
参数:
- **include_sublayers** (bool, 可选) - 是否返回子层的buffers。如果为True,返回的列表中包含子层的buffers。默认值:True
返回:一个由当前层及其子层的所有buffers组成的列表,列表中的元素类型为Variable
返回类型:list
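以下补充一段示意代码(非本节原有示例,仅作参考;假设当前环境可使用 ``paddle.fluid`` 动态图接口,其中 ``buf_name`` 为演示用的假设名称),展示注册buffer后如何通过 ``buffers()`` 获取列表:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.Linear(10, 3)
        value = np.array([0]).astype("float32")
        buffer = fluid.dygraph.to_variable(value)
        # 注册一个可持久性buffer,名称 "buf_name" 仅为演示假设
        linear.register_buffer("buf_name", buffer, persistable=True)

        # buffers() 返回当前层及其子层所有buffers组成的列表
        for buf in linear.buffers(include_sublayers=True):
            print(buf)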
.. py:method:: named_buffers(prefix='', include_sublayers=True)
返回层中所有buffers的迭代器,生成名称和buffer的元组。
参数:
- **prefix** (str, 可选) - 在所有buffer名称前加的前缀。默认值:''
- **include_sublayers** (bool, 可选) - 是否返回子层的buffers。如果为True,返回的列表中包含子层的buffers。默认值:True
返回:产出名称和buffer的元组的迭代器。
返回类型:iterator
**代码示例**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
with fluid.dygraph.guard():
fc1 = fluid.Linear(10, 3)
buffer1 = fluid.dygraph.to_variable(np.array([0]).astype("float32"))
# register a variable as buffer by specific `persistable`
fc1.register_buffer("buf_name_1", buffer1, persistable=True)
fc2 = fluid.Linear(3, 10)
buffer2 = fluid.dygraph.to_variable(np.array([1]).astype("float32"))
# register a buffer by assigning an attribute with Variable.
# The `persistable` can only be False by this way.
fc2.buf_name_2 = buffer2
model = fluid.dygraph.Sequential(fc1, fc2)
# get all named buffers
for name, buffer in model.named_buffers():
print(name, buffer)
.. py:method:: forward(*inputs, **kwargs)
定义每次调用时执行的计算。应该被所有子类覆盖。
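以下补充一段示意代码(非本节原有示例,仅作参考;其中 ``MyLayer`` 为演示用的假设类名),展示子类如何覆盖 ``forward`` 方法并在动态图模式下调用:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    class MyLayer(fluid.dygraph.Layer):
        def __init__(self):
            super(MyLayer, self).__init__()
            self._linear = fluid.dygraph.Linear(4, 2)

        def forward(self, x):
            # 子类覆盖forward,定义每次调用时执行的计算
            return self._linear(x)

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(np.ones([3, 4], dtype="float32"))
        out = MyLayer()(x)  # 调用Layer实例时会执行forward
        print(out.numpy().shape)  # (3, 2)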
......@@ -290,13 +371,13 @@ hook(Layer, input, output) -> None or modified output
.. py:method:: state_dict(destination=None, include_sublayers=True)
获取当前层及其子层的所有参数。并将所有参数存放在dict结构中。
获取当前层及其子层的所有参数和可持久性buffers。并将所有参数和buffers存放在dict结构中。
参数:
- **destination** (dict, 可选) - 如果提供 ``destination`` ,则所有参数都将存放在 ``destination`` 中。 默认值:None
- **include_sublayers** (bool, 可选) - 如果设置为True,则包括子层的参数。默认值:True
- **destination** (dict, 可选) - 如果提供 ``destination`` ,则所有参数和可持久性buffers都将存放在 ``destination`` 中。 默认值:None
- **include_sublayers** (bool, 可选) - 如果设置为True,则包括子层的参数和buffers。默认值:True
返回:包含所有参数的dict
返回:包含所有参数和可持久性buffers的dict
返回类型:dict
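以下补充一段示意代码(非本节原有示例,仅作参考),展示如何获取 ``state_dict`` 并查看其中的参数和可持久性buffers:

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        emb = fluid.dygraph.Embedding([10, 10])
        state_dict = emb.state_dict()
        # state_dict为dict,key为参数/可持久性buffer的名称
        for name, param in state_dict.items():
            print(name, param.shape)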
......@@ -312,11 +393,11 @@ hook(Layer, input, output) -> None or modified output
.. py:method:: set_dict(stat_dict, include_sublayers=True)
根据传入的 ``stat_dict`` 设置参数 所有参数将由 ``stat_dict`` 中的 ``Tensor`` 设置。
根据传入的 ``stat_dict`` 设置参数和可持久性buffers 所有参数和buffers将由 ``stat_dict`` 中的 ``Tensor`` 设置。
参数:
- **state_dict** (dict) - 包含所有参数的dict
- **include_sublayers** (bool, 可选) - 如果设置为True,则还包括子层的参数。 默认值:True
- **state_dict** (dict) - 包含所有参数和可持久性buffersdict
- **include_sublayers** (bool, 可选) - 如果设置为True,则还包括子层的参数buffers 默认值:True
返回:None
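以下补充一段示意代码(非本节原有示例,仅作参考;其中保存路径 "paddle_dy" 为演示用的假设名称),展示如何配合 ``fluid.save_dygraph`` / ``fluid.load_dygraph`` 使用 ``set_dict`` 恢复参数和可持久性buffers:

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        emb = fluid.dygraph.Embedding([10, 10])
        state_dict = emb.state_dict()
        fluid.save_dygraph(state_dict, "paddle_dy")

        # 从磁盘载入后通过set_dict设置参数和可持久性buffers
        para_state_dict, _ = fluid.load_dygraph("paddle_dy")
        emb.set_dict(para_state_dict)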
......@@ -337,11 +418,11 @@ hook(Layer, input, output) -> None or modified output
.. warning::
该函数将被弃用。请使用set_dict函数。
根据传入的 ``stat_dict`` 设置参数 所有参数将由 ``stat_dict`` 中的 ``Tensor`` 设置。
根据传入的 ``stat_dict`` 设置参数和可持久性buffers 所有参数和buffers将由 ``stat_dict`` 中的 ``Tensor`` 设置。
参数:
- **state_dict** (dict) - 包含所有参数的dict
- **include_sublayers** (bool, 可选) - 如果设置为True,则还包括子层的参数。 默认值:True
- **state_dict** (dict) - 包含所有参数和可持久性buffersdict
- **include_sublayers** (bool, 可选) - 如果设置为True,则还包括子层的参数和buffers。 默认值:True
返回:None
......
.. _cn_api_fluid_dygraph_MultiStepDecay:
MultiStepDecay
-------------------------------
.. py:class:: paddle.fluid.dygraph.MultiStepDecay(learning_rate, milestones, decay_rate=0.1)
:api_attr: 命令式编程模式(动态图)
该接口提供 ``MultiStep`` 衰减学习率的功能。
算法可以描述为:
.. code-block:: text
learning_rate = 0.5
milestones = [30, 50]
decay_rate = 0.1
if epoch < 30:
learning_rate = 0.5
elif epoch < 50:
learning_rate = 0.05
else:
learning_rate = 0.005
参数:
- **learning_rate** (float|int) - 初始化的学习率。可以是Python的float或int。
- **milestones** (tuple|list) - 列表或元组。必须是递增的。
- **decay_rate** (float, optional) - 学习率的衰减率。 ``new_lr = origin_lr * decay_rate`` 。其值应该小于1.0。默认:0.1。
返回: 无
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
with fluid.dygraph.guard():
x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
linear = fluid.dygraph.Linear(10, 10)
input = fluid.dygraph.to_variable(x)
scheduler = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
adam = fluid.optimizer.Adam(learning_rate = scheduler, parameter_list = linear.parameters())
for epoch in range(6):
for batch_id in range(5):
out = linear(input)
loss = fluid.layers.reduce_mean(out)
adam.minimize(loss)
scheduler.epoch()
print("epoch:{}, current lr is {}" .format(epoch, adam.current_step_lr()))
# epoch:0, current lr is 0.5
# epoch:1, current lr is 0.5
# epoch:2, current lr is 0.5
# epoch:3, current lr is 0.05
# epoch:4, current lr is 0.05
# epoch:5, current lr is 0.005
.. py:method:: epoch(epoch=None)
通过当前的 epoch 调整学习率,调整后的学习率将会在下一次调用 ``optimizer.minimize`` 时生效。
参数:
- **epoch** (int|float,可选) - 类型:int或float。指定当前的epoch数。默认:无,此时将会自动累计epoch数。
返回:
**代码示例**:
参照上述示例代码。
......@@ -45,7 +45,6 @@ NCE
words.append(fluid.dygraph.base.to_variable(inp_word[i]))
emb = fluid.Embedding(
'embedding',
size=[dict_size, 32],
param_attr='emb.w',
is_sparse=False)
......@@ -70,7 +69,7 @@ NCE
bias_attr='nce.b')
wl = fluid.layers.unsqueeze(words[label_word], axes=[0])
nce_loss3 = nce(embs3, words[label_word])
nce_loss3 = nce(embs3, wl)
属性
::::::::::::
......
......@@ -42,7 +42,7 @@ NaturalExpDecay
- **staircase** (bool,可选) - 若为True, 学习率变化曲线呈阶梯状,若为False,学习率变化值曲线为平滑的曲线。默认值为False。
- **begin** (int,可选) – 起始步,即以上运算式子中global_step的初始化值。默认值为0。
- **step** (int,可选) – 步大小,即以上运算式子中global_step的每次的增量值。默认值为1。
- **dtype** – (str,可选) 初始化学习率变量的数据类型,可以为"float32", "float64"。默认值为"float32"。
- **dtype** (str,可选) – 学习率值的数据类型,可以为"float32", "float64"。默认值为"float32"。
返回: 无
......@@ -53,12 +53,14 @@ NaturalExpDecay
import paddle.fluid as fluid
base_lr = 0.1
with fluid.dygraph.guard():
emb = fluid.dygraph.Embedding([10, 10])
sgd_optimizer = fluid.optimizer.SGD(
learning_rate=fluid.dygraph.NaturalExpDecay(
learning_rate=base_lr,
decay_steps=10000,
decay_rate=0.5,
staircase=True))
staircase=True),
parameter_list=emb.parameters())
......
......@@ -55,10 +55,8 @@ PolynomialDecay
total_step = 5000
end_lr = 0
with fluid.dygraph.guard():
emb = fluid.dygraph.Embedding( [10, 10])
optimizer = fluid.optimizer.SGD(
learning_rate = fluid.dygraph.PolynomialDecay(
start_lr, total_step, end_lr, power=1.0) )
start_lr, total_step, end_lr, power=1.0),
parameter_list = emb.parameters())
......@@ -3,7 +3,7 @@
Pool2D
-------------------------------
.. py:class:: paddle.fluid.dygraph.Pool2D(pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, exclusive=True)
.. py:class:: paddle.fluid.dygraph.Pool2D(pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, exclusive=True, data_format="NCHW")
:alias_main: paddle.nn.Pool2D
:alias: paddle.nn.Pool2D,paddle.nn.layer.Pool2D,paddle.nn.layer.common.Pool2D
......@@ -13,7 +13,7 @@ Pool2D
该接口用于构建 ``Pool2D`` 类的一个可调用对象,具体用法参照 ``代码示例`` 。其将在神经网络中构建一个二维池化层,并使用上述输入参数的池化配置,为二维空间池化操作,根据 ``input`` , 池化类型 ``pool_type`` , 池化核大小 ``pool_size`` , 步长 ``pool_stride`` ,填充 ``pool_padding`` 这些参数得到输出。
输入X和输出Out是NCHW格式,N为批大小,C是通道数,H是特征高度,W是特征宽度。参数( ``ksize``, ``strides``, ``paddings`` )含有两个整型元素。分别表示高度和宽度上的参数。输入X的大小和输出Out的大小可能不一致。
输入X和输出Out默认是NCHW格式,N为批大小,C是通道数,H是特征高度,W是特征宽度。参数( ``ksize``, ``strides``, ``paddings`` )含有两个整型元素。分别表示高度和宽度上的参数。输入X的大小和输出Out的大小可能不一致。
例如:
......@@ -66,13 +66,15 @@ Pool2D
- **use_cudnn** (bool, 可选)- 是否用cudnn核,只有已安装cudnn库时才有效。默认True。
- **ceil_mode** (bool, 可选)- 是否用ceil函数计算输出高度和宽度。如果设为False,则使用floor函数。默认为False。
- **exclusive** (bool, 可选) - 是否在平均池化模式忽略填充值。默认为True。
- **data_format** (str,可选) - 指定输入的数据格式,输出的数据格式将与输入保持一致,可以是"NCHW"和"NHWC"。N是批尺寸,C是通道数,H是特征高度,W是特征宽度。默认值:"NCHW"。
返回:无
抛出异常:
- ``ValueError`` - 如果 ``pool_type`` 既不是“max”也不是“avg”
- ``ValueError`` - 如果 ``global_pooling`` 为False并且‘pool_size’为-1
- ``ValueError`` - 如果 ``use_cudnn`` 不是bool值
- ``ValueError`` - 如果 ``pool_type`` 既不是“max”也不是“avg”。
- ``ValueError`` - 如果 ``global_pooling`` 为False并且 ``pool_size`` 为-1。
- ``ValueError`` - 如果 ``use_cudnn`` 不是bool值。
- ``ValueError`` - 如果 ``data_format`` 既不是"NCHW"也不是"NHWC"。
**代码示例**
......@@ -80,9 +82,10 @@ Pool2D
import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable
import numpy as np
with fluid.dygraph.guard():
data = numpy.random.random((3, 32, 32, 5)).astype('float32')
data = np.random.random((3, 32, 32, 5)).astype('float32')
pool2d = fluid.dygraph.Pool2D(pool_size=2,
pool_type='max',
pool_stride=1,
......
.. _cn_api_fluid_dygraph_ReduceLROnPlateau:
ReduceLROnPlateau
-------------------------------
**注意:该API仅支持【动态图】模式**
.. py:class:: paddle.fluid.dygraph.ReduceLROnPlateau(learning_rate, mode='min', decay_rate=0.1, patience=10, verbose=False, threshold=1e-4, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-8, dtype='float32')
该API为 ``loss`` 自适应的学习率衰减策略。默认情况下,当 ``loss`` 停止下降时,降低学习率(如果将 ``mode`` 设置为 `'max'` ,此时判断逻辑相反, ``loss`` 停止上升时降低学习率)。其思想是:一旦模型表现不再提升,将学习率降低2-10倍对模型的训练往往有益。
``loss`` 是传入到该类方法 ``step`` 中的参数,其必须是shape为[1]的1-D Tensor。 如果 ``loss`` 停止下降(``mode`` 为 `min` 时)超过 ``patience`` 个epoch,学习率将会减小为
`learning_rate * decay_rate` 。
此外,每降低一次学习率后,将会进入一个时长为 ``cooldown`` 个epoch的冷静期,在冷静期内,将不会监控 ``loss`` 的变化情况,也不会衰减。
在冷静期之后,会继续监控 ``loss`` 的上升或下降。
参数:
- **learning_rate** (Variable|float|int) - 初始学习率。其类型可以是Python的float类型,如果输入int类型则会被转为float类型。其也可以是shape为[1]的
1-D Tensor,且相应数据类型必须为 "float32" 或 "float64" 。
- **mode** (str,可选) - `'min'` 和 `'max'` 之一。通常情况下,为 `'min'` ,此时当 ``loss`` 停止下降时学习率将减小。默认:`'min'` 。
(注意:仅在特殊用法时,可以将其设置为 `'max'` ,此时判断逻辑相反, ``loss`` 停止上升时学习率才减小)
- **decay_rate** (float,可选) - 学习率衰减的比例。`new_lr = origin_lr * decay_rate` ,它是值小于1.0的float型数字,默认: 0.1。
- **patience** (int,可选) - 当 ``loss`` 连续 ``patience`` 个epoch没有下降(mode: 'min')或上升(mode: 'max')时,学习率才会减小。默认:10。
- **verbose** (bool,可选) - 如果为 ``True`` , 会在每次更新optimizer中的learning_rate时,打印信息。默认:``False`` 。
- **threshold** (float,可选) - ``threshold`` 和 ``threshold_mode`` 两个参数将会决定 ``loss`` 最小变化的阈值。小于该阈值的变化
将会被忽视。默认:1e-4。
- **threshold_mode** (str,可选) - `'rel'` 和 `'abs'` 之一。在 `'rel'` 模式下, ``loss`` 最小变化的阈值是 `last_loss * threshold` ,
其中 ``last_loss`` 是 ``loss`` 在上个epoch的值。在 `'abs'` 模式下,``loss`` 最小变化的阈值是 `threshold` 。 默认:`'rel'`。
- **cooldown** (int,可选) - 在学习速率每次减小之后,会进入时长为 ``cooldown`` 个epoch的冷静期。默认:0。
- **min_lr** (float,可选) - 最小的学习率。减小后的学习率最低下界限。默认:0。
- **eps** (float,可选) - 如果新旧学习率间的差异小于 ``eps`` ,则不会更新。默认值:1e-8。
- **dtype** (str,可选) – 学习率值的数据类型,可以为"float32", "float64"。默认:"float32"。
返回: ``loss`` 自适应的学习率
返回类型:Variable
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
with fluid.dygraph.guard():
x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
linear = fluid.dygraph.Linear(10, 10)
input = fluid.dygraph.to_variable(x)
reduce_lr = fluid.dygraph.ReduceLROnPlateau(
    learning_rate = 1.0,
    decay_rate = 0.5,
    patience = 5,
    verbose = True,
    cooldown = 3)
adam = fluid.optimizer.Adam(
    learning_rate = reduce_lr,
    parameter_list = linear.parameters())
for epoch in range(10):
total_loss = 0
for batch_id in range(5):
out = linear(input)
loss = fluid.layers.reduce_mean(out)
total_loss += loss
adam.minimize(loss)
avg_loss = total_loss/5
# 根据传入的avg_loss,调整学习率
reduce_lr.step(avg_loss)
lr = adam.current_step_lr()
print("current avg_loss is %s, current lr is %s" % (avg_loss.numpy()[0], lr))
.. py:method:: step(loss)
需要在每个epoch调用该方法,其根据传入的 ``loss`` 调整optimizer中的学习率,调整后的学习率将会在下一次调用 ``optimizer.minimize`` 时生效。
参数:
- **loss** (Variable) - 类型:Variable,shape为[1]的1-D Tensor。将被用来判断是否需要降低学习率。如果 ``loss`` 连续 ``patience`` 个epochs没有下降,
将会降低学习率。
返回:
**代码示例**:
参照其类中的说明。
.. _cn_api_fluid_dygraph_StepDecay:
StepDecay
-------------------------------
.. py:class:: paddle.fluid.dygraph.StepDecay(learning_rate, step_size, decay_rate=0.1)
:api_attr: 命令式编程模式(动态图)
该接口提供 ``step_size`` 衰减学习率的功能,每经过 ``step_size`` 个 ``epoch`` 时会通过 ``decay_rate`` 衰减一次学习率。
算法可以描述为:
.. code-block:: text
learning_rate = 0.5
step_size = 30
decay_rate = 0.1
learning_rate = 0.5 if epoch < 30
learning_rate = 0.05 if 30 <= epoch < 60
learning_rate = 0.005 if 60 <= epoch < 90
...
参数:
- **learning_rate** (float|int) - 初始化的学习率。可以是Python的float或int。
- **step_size** (int) - 学习率每衰减一次的间隔。
- **decay_rate** (float, optional) - 学习率的衰减率。 ``new_lr = origin_lr * decay_rate`` 。其值应该小于1.0。默认:0.1。
返回: 无
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
with fluid.dygraph.guard():
x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
linear = fluid.dygraph.Linear(10, 10)
input = fluid.dygraph.to_variable(x)
scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
adam = fluid.optimizer.Adam(learning_rate = scheduler, parameter_list = linear.parameters())
for epoch in range(9):
for batch_id in range(5):
out = linear(input)
loss = fluid.layers.reduce_mean(out)
adam.minimize(loss)
scheduler.epoch()
print("epoch:{}, current lr is {}" .format(epoch, adam.current_step_lr()))
# epoch:0, current lr is 0.5
# epoch:1, current lr is 0.5
# epoch:2, current lr is 0.5
# epoch:3, current lr is 0.05
# epoch:4, current lr is 0.05
# epoch:5, current lr is 0.05
# epoch:6, current lr is 0.005
# epoch:7, current lr is 0.005
# epoch:8, current lr is 0.005
.. py:method:: epoch(epoch=None)
通过当前的 epoch 调整学习率,调整后的学习率将会在下一次调用 ``optimizer.minimize`` 时生效。
参数:
- **epoch** (int|float,可选) - 类型:int或float。指定当前的epoch数。默认:无,此时将会自动累计epoch数。
返回:
**代码示例**:
参照上述示例代码。
.. _cn_api_fluid_dygraph_TranslatedLayer:
TranslatedLayer
-------------------------------
.. py:class:: paddle.fluid.dygraph.TranslatedLayer(programs, persistable_vars)
``TranslatedLayer`` 是一个命令式编程模式 :ref:`cn_api_fluid_dygraph_Layer` 的继承类,
通过 :ref:`cn_api_fluid_dygraph_jit_load` 载入构建。能够像一般 ``Layer`` 一样在train或者eval模式下使用。
.. note::
``TranslatedLayer`` 对象不能够通过构造函数创建,仅能够通过 :ref:`cn_api_fluid_dygraph_jit_load` 接口载入构建。
**示例代码:**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
BATCH_SIZE = 32
BATCH_NUM = 20
def random_batch_reader():
def _get_random_images_and_labels(image_shape, label_shape):
image = np.random.random(size=image_shape).astype('float32')
label = np.random.random(size=label_shape).astype('int64')
return image, label
def __reader__():
for _ in range(BATCH_NUM):
batch_image, batch_label = _get_random_images_and_labels(
[BATCH_SIZE, 784], [BATCH_SIZE, 1])
yield batch_image, batch_label
return __reader__
class LinearNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(LinearNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
return self._linear(x)
# 开启命令式编程模式
fluid.enable_dygraph()
# 1. 训练存储模型.
# 创建网络
net = LinearNet(784, 1)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
# 创建DataLoader
train_loader = fluid.io.DataLoader.from_generator(capacity=5)
train_loader.set_batch_generator(random_batch_reader())
# 训练
for data in train_loader():
img, label = data
label.stop_gradient = True
cost = net(img)
loss = fluid.layers.cross_entropy(cost, label)
avg_loss = fluid.layers.mean(loss)
avg_loss.backward()
adam.minimize(avg_loss)
net.clear_gradients()
model_path = "linear.example.model"
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[img])
# 2. 载入模型构建TranslatedLayer
translated_layer = fluid.dygraph.jit.load(model_path)
# 预测
translated_layer.eval()
x = fluid.dygraph.to_variable(np.random.random((1, 784)).astype('float32'))
pred = translated_layer(x)
# fine-tune训练
translated_layer.train()
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=translated_layer.parameters())
train_loader = fluid.io.DataLoader.from_generator(capacity=5)
train_loader.set_batch_generator(random_batch_reader())
for data in train_loader():
img, label = data
label.stop_gradient = True
cost = translated_layer(img)
loss = fluid.layers.cross_entropy(cost, label)
avg_loss = fluid.layers.mean(loss)
avg_loss.backward()
adam.minimize(avg_loss)
translated_layer.clear_gradients()
===
jit
===
.. toctree::
:maxdepth: 1
jit_cn/save_cn.rst
jit_cn/load_cn.rst
jit_cn/SaveLoadConfig_cn.rst
.. _cn_api_fluid_dygraph_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
.. py:class:: paddle.fluid.dygraph.jit.SaveLoadConfig()
用于配置接口 :ref:`cn_api_fluid_dygraph_jit_save` 和 :ref:`cn_api_fluid_dygraph_jit_load` 存储载入 :ref:`cn_api_fluid_dygraph_TranslatedLayer` 时的附加选项。
**示例代码:**
1. 在存储模型时使用 ``SaveLoadConfig``
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
class SimpleNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(SimpleNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
y = self._linear(x)
z = self._linear(y)
return z
# 开启命令式编程模式
fluid.enable_dygraph()
# 训练模型
net = SimpleNet(8, 8)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
for i in range(10):
out = net(x)
loss = fluid.layers.mean(out)
loss.backward()
adam.minimize(loss)
net.clear_gradients()
# 在存储模型时使用SaveLoadConfig
model_path = "simplenet.example.model"
configs = fluid.dygraph.jit.SaveLoadConfig()
configs.model_filename = "__simplenet__"
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[x],
configs=configs)
2. 在载入模型时使用 ``SaveLoadConfig``
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
# 开启命令式编程模式
fluid.enable_dygraph()
# 在载入模型时使用SaveLoadconfig
model_path = "simplenet.example.model"
configs = fluid.dygraph.jit.SaveLoadConfig()
configs.model_filename = "__simplenet__"
infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
# 预测
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
pred = infer_net(x)
属性
::::::::::::
.. py:attribute:: output_spec
选择保存模型( :ref:`cn_api_fluid_dygraph_TranslatedLayer` )的输出变量,通过指定的这些变量能够使模型仅计算特定的结果。
默认情况下,原始 :ref:`cn_api_fluid_dygraph_Layer` 的forward方法的所有返回变量都将配置为存储后模型 :ref:`cn_api_fluid_dygraph_TranslatedLayer` 的输出变量。
``output_spec`` 属性类型需要是 ``list[Variable]``。如果输入的 ``output_spec`` 列表不是原始 :ref:`cn_api_fluid_dygraph_Layer` 的forward方法的所有返回变量,
将会依据输入的 ``output_spec`` 列表对存储的模型进行裁剪。
.. note::
``output_spec`` 属性仅在存储模型时使用。
**示例代码:**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
class SimpleNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(SimpleNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
y = self._linear(x)
z = self._linear(y)
loss = fluid.layers.mean(z)
return z, loss
# 开启命令式编程模式
fluid.enable_dygraph()
# 训练模型
net = SimpleNet(8, 8)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
for i in range(10):
out, loss = net(x)
loss.backward()
adam.minimize(loss)
net.clear_gradients()
# 使用SaveLoadconfig.output_spec
model_path = "simplenet.example.model.output_spec"
configs = fluid.dygraph.jit.SaveLoadConfig()
# 仅在存储模型中保留预测结果,丢弃loss
configs.output_spec = [out]
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[x],
configs=configs)
infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
# 仅有预测结果输出
pred = infer_net(x)
.. py:attribute:: model_filename
存储转写 :ref:`cn_api_fluid_dygraph_Layer` 模型结构 ``Program`` 的文件名称。默认文件名为 ``__model__``。
**示例代码**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
class SimpleNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(SimpleNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
y = self._linear(x)
z = self._linear(y)
return z
# 开启命令式编程模式
fluid.enable_dygraph()
# 训练模型
net = SimpleNet(8, 8)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
for i in range(10):
out = net(x)
loss = fluid.layers.mean(out)
loss.backward()
adam.minimize(loss)
net.clear_gradients()
model_path = "simplenet.example.model.model_filename"
configs = fluid.dygraph.jit.SaveLoadConfig()
configs.model_filename = "__simplenet__"
# 配置configs.model_filename存储模型
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[x],
configs=configs)
# [结果] 存储模型目录文件包括:
# __simplenet__ __variables__ __variables.info__
# 配置configs.model_filename载入模型
infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
pred = infer_net(x)
.. py:attribute:: params_filename
存储转写 :ref:`cn_api_fluid_dygraph_Layer` 所有持久参数(包括 ``Parameters`` 和持久的 ``Buffers``)的文件名称。默认文件名称为 ``__variables__``。
**示例代码**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
class SimpleNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(SimpleNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
y = self._linear(x)
z = self._linear(y)
return z
# 开启命令式编程模式
fluid.enable_dygraph()
# 训练模型
net = SimpleNet(8, 8)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
for i in range(10):
out = net(x)
loss = fluid.layers.mean(out)
loss.backward()
adam.minimize(loss)
net.clear_gradients()
model_path = "simplenet.example.model.params_filename"
configs = fluid.dygraph.jit.SaveLoadConfig()
configs.params_filename = "__params__"
# 配置configs.params_filename存储模型
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[x],
configs=configs)
# [结果] 存储模型目录文件包括:
# __model__ __params__ __variables.info__
# 配置configs.params_filename载入模型
infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
pred = infer_net(x)
.. py:attribute:: separate_params
配置是否将 :ref:`cn_api_fluid_dygraph_Layer` 的参数存储为分散的文件。
(这是为了兼容接口 :ref:`cn_api_fluid_io_save_inference_model` 的行为)
如果设置为 ``True`` ,每个参数将会被存储为一个文件,文件名为参数名,同时 ``SaveLoadConfig.params_filename`` 指定的文件名将不会生效。默认为 ``False``。
**示例代码**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
class SimpleNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(SimpleNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
y = self._linear(x)
z = self._linear(y)
return z
# 开启命令式编程模式
fluid.enable_dygraph()
# 训练模型
net = SimpleNet(8, 8)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
for i in range(10):
out = net(x)
loss = fluid.layers.mean(out)
loss.backward()
adam.minimize(loss)
net.clear_gradients()
model_path = "simplenet.example.model.separate_params"
configs = fluid.dygraph.jit.SaveLoadConfig()
configs.separate_params = True
# 配置configs.separate_params存储模型
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[x],
configs=configs)
# [结果] 存储模型目录文件包括:
# linear_0.b_0 linear_0.w_0 __model__ __variables.info__
# 配置configs.params_filename载入模型
infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
pred = infer_net(x)
.. _cn_api_fluid_dygraph_jit_load:
load
-----------------
.. py:function:: paddle.fluid.dygraph.jit.load(model_path, configs=None)
:api_attr: 命令式编程模式(动态图)
将接口 :ref:`cn_api_fluid_dygraph_jit_save` 或者 :ref:`cn_api_fluid_io_save_inference_model` 存储的模型载入为 :ref:`cn_api_fluid_dygraph_TranslatedLayer` ,用于预测推理或者fine-tune训练。
.. note::
由于一些历史原因,如果载入的模型是通过 :ref:`cn_api_fluid_io_save_inference_model` 存储的,
在使用它进行fine-tune训练时会存在一些局限:
1. 命令式编程模式不支持 ``LoDTensor`` ,所有原先输入变量或者参数依赖于LoD信息的模型暂时无法使用;
2. 所有存储模型的feed变量都需要被传入 ``Translatedlayer`` 的forward方法;
3. 原模型变量的 ``stop_gradient`` 信息已丢失且无法准确恢复;
4. 原模型参数的 ``trainable`` 信息已丢失且无法准确恢复。
参数:
- **model_path** (str) - 存储模型的目录。
- **configs** (SaveLoadConfig, 可选) - 用于指定额外配置选项的 :ref:`cn_api_fluid_dygraph_jit_SaveLoadConfig` 对象。默认为 ``None``。
返回:TranslatedLayer - 一个能够执行存储模型的 ``Layer`` 对象。
**示例代码**
1. 载入由接口 :ref:`cn_api_fluid_dygraph_jit_save` 存储的模型进行预测推理及fine-tune训练。
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
BATCH_SIZE = 32
BATCH_NUM = 20
def random_batch_reader():
def _get_random_images_and_labels(image_shape, label_shape):
image = np.random.random(size=image_shape).astype('float32')
label = np.random.random(size=label_shape).astype('int64')
return image, label
def __reader__():
for _ in range(BATCH_NUM):
batch_image, batch_label = _get_random_images_and_labels(
[BATCH_SIZE, 784], [BATCH_SIZE, 1])
yield batch_image, batch_label
return __reader__
class LinearNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(LinearNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
return self._linear(x)
# 开启命令式编程模式
fluid.enable_dygraph()
# 1. 训练存储模型.
# 创建网络
net = LinearNet(784, 1)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
# 创建DataLoader
train_loader = fluid.io.DataLoader.from_generator(capacity=5)
train_loader.set_batch_generator(random_batch_reader())
# 训练
for data in train_loader():
img, label = data
label.stop_gradient = True
cost = net(img)
loss = fluid.layers.cross_entropy(cost, label)
avg_loss = fluid.layers.mean(loss)
avg_loss.backward()
adam.minimize(avg_loss)
net.clear_gradients()
model_path = "linear.example.model"
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[img])
# 2. 载入模型 & 预测
# 载入模型
infer_net = fluid.dygraph.jit.load(model_path)
# 预测
x = fluid.dygraph.to_variable(np.random.random((1, 784)).astype('float32'))
pred = infer_net(x)
# 3. 载入模型 & fine-tune训练
# 载入模型
train_net = fluid.dygraph.jit.load(model_path)
train_net.train()
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=train_net.parameters())
# 创建DataLoader
train_loader = fluid.io.DataLoader.from_generator(capacity=5)
train_loader.set_batch_generator(random_batch_reader())
# fine-tune训练
for data in train_loader():
img, label = data
label.stop_gradient = True
cost = train_net(img)
loss = fluid.layers.cross_entropy(cost, label)
avg_loss = fluid.layers.mean(loss)
avg_loss.backward()
adam.minimize(avg_loss)
train_net.clear_gradients()
2. 载入由接口 :ref:`cn_api_fluid_io_save_inference_model` 存储的模型进行预测推理及fine-tune训练。
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
BATCH_SIZE = 32
BATCH_NUM = 20
def random_batch_reader():
def _get_random_images_and_labels(image_shape, label_shape):
image = np.random.random(size=image_shape).astype('float32')
label = np.random.random(size=label_shape).astype('int64')
return image, label
def __reader__():
for _ in range(BATCH_NUM):
batch_image, batch_label = _get_random_images_and_labels(
[BATCH_SIZE, 784], [BATCH_SIZE, 1])
yield batch_image, batch_label
return __reader__
img = fluid.data(name='img', shape=[None, 784], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
pred = fluid.layers.fc(input=img, size=10, act='softmax')
loss = fluid.layers.cross_entropy(input=pred, label=label)
avg_loss = fluid.layers.mean(loss)
optimizer = fluid.optimizer.SGD(learning_rate=0.001)
optimizer.minimize(avg_loss)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
loader = fluid.io.DataLoader.from_generator(
feed_list=[img, label], capacity=5, iterable=True)
loader.set_batch_generator(random_batch_reader(), places=place)
# 1. 训练 & 存储预测模型
for data in loader():
exe.run(
fluid.default_main_program(),
feed=data,
fetch_list=[avg_loss])
model_path = "fc.example.model"
fluid.io.save_inference_model(
model_path, ["img"], [pred], exe)
# 开启命令式编程模式
fluid.enable_dygraph()
# 2. 载入模型 & 预测
fc = fluid.dygraph.jit.load(model_path)
x = fluid.dygraph.to_variable(np.random.random((1, 784)).astype('float32'))
pred = fc(x)
# 3. 载入模型 & fine-tune训练
fc = fluid.dygraph.jit.load(model_path)
fc.train()
sgd = fluid.optimizer.SGD(learning_rate=0.001,
parameter_list=fc.parameters())
train_loader = fluid.io.DataLoader.from_generator(capacity=5)
train_loader.set_batch_generator(
random_batch_reader(), places=place)
for data in train_loader():
img, label = data
label.stop_gradient = True
cost = fc(img)
loss = fluid.layers.cross_entropy(cost, label)
avg_loss = fluid.layers.mean(loss)
avg_loss.backward()
sgd.minimize(avg_loss)
.. _cn_api_fluid_dygraph_jit_save:
save
-----------------
.. py:function:: paddle.fluid.dygraph.jit.save(layer, model_path, input_spec=None, configs=None)
将输入的经过 ``@declarative`` 装饰的 :ref:`cn_api_fluid_dygraph_Layer` 存储为 :ref:`cn_api_fluid_dygraph_TranslatedLayer` 格式的模型,
载入后可用于预测推理或者fine-tune训练。
该接口将会将输入 :ref:`cn_api_fluid_dygraph_Layer` 转写后的模型结构 ``Program`` 和所有必要的持久参数变量存储至输入路径 ``model_path`` 中。
默认存储的 ``Program`` 文件名为 ``__model__``, 默认存储持久参数变量的文件名为 ``__variables__``,
同时会将变量的一些描述信息存储至文件 ``__variables.info__``,这些额外的信息将在fine-tune训练中使用。
存储的模型能够被以下API载入使用:
- :ref:`cn_api_fluid_dygraph_jit_load`
- :ref:`cn_api_fluid_io_load_inference_model` (需要配置参数 ``params_filename='__variables__'`` )
- 其他预测库API
参数:
- **layer** (Layer) - 需要存储的 :ref:`cn_api_fluid_dygraph_Layer` 对象。输入的 ``Layer`` 需要经过 ``@declarative`` 装饰。
- **model_path** (str) - 存储模型的目录。
- **input_spec** (list[Variable], 可选) - 描述存储模型的输入。此参数是传入当前存储的 ``TranslatedLayer`` forward方法的一个示例输入。如果为 ``None`` ,所有原 ``Layer`` forward方法的输入变量将都会被配置为存储模型的输入变量。默认为 ``None``。
- **configs** (SaveLoadConfig, 可选) - 用于指定额外配置选项的 :ref:`cn_api_fluid_dygraph_jit_SaveLoadConfig` 对象。默认为 ``None``。
返回:无
**示例代码**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from paddle.fluid.dygraph import declarative
BATCH_SIZE = 32
BATCH_NUM = 20
def random_batch_reader():
def _get_random_images_and_labels(image_shape, label_shape):
image = np.random.random(size=image_shape).astype('float32')
label = np.random.random(size=label_shape).astype('int64')
return image, label
def __reader__():
for _ in range(BATCH_NUM):
batch_image, batch_label = _get_random_images_and_labels(
[BATCH_SIZE, 784], [BATCH_SIZE, 1])
yield batch_image, batch_label
return __reader__
class LinearNet(fluid.dygraph.Layer):
def __init__(self, in_size, out_size):
super(LinearNet, self).__init__()
self._linear = Linear(in_size, out_size)
@declarative
def forward(self, x):
return self._linear(x)
# 开启命令式编程模式
fluid.enable_dygraph()
# 创建网络
net = LinearNet(784, 1)
adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
# 创建DataLoader
train_loader = fluid.io.DataLoader.from_generator(capacity=5)
train_loader.set_batch_generator(random_batch_reader())
# 训练
for data in train_loader():
img, label = data
label.stop_gradient = True
cost = net(img)
loss = fluid.layers.cross_entropy(cost, label)
avg_loss = fluid.layers.mean(loss)
avg_loss.backward()
adam.minimize(avg_loss)
net.clear_gradients()
# 存储模型
model_path = "linear.example.model"
fluid.dygraph.jit.save(
layer=net,
model_path=model_path,
input_spec=[img])
......@@ -4,30 +4,33 @@ no_grad
-------------------------------
.. py:method:: paddle.fluid.dygraph.no_grad(func=None)
.. py:class:: paddle.fluid.dygraph.no_grad
:api_attr: 命令式编程模式(动态图)
:alias_main: paddle.no_grad
:alias: paddle.no_grad
:old_api: paddle.fluid.dygraph.no_grad
创建一个上下文来禁用动态图梯度计算。在此模式下,每次计算的结果都将具有stop_gradient=True。
也可以用作一个装饰器(确保不要用括号来初始化)。
也可以用作一个装饰器(需要创建实例对象作为装饰器)。
**代码示例**
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
paddle.enable_imperative()
# 用作生成器
data = np.array([[2, 3], [4, 5]]).astype('float32')
with fluid.dygraph.guard():
l0 = fluid.Linear(2, 2) # l0.weight.gradient() is None
l1 = fluid.Linear(2, 2)
with fluid.dygraph.no_grad():
with fluid.no_grad():
# l1.weight.stop_gradient is False
tmp = l1.weight * 2 # tmp.stop_gradient is True
x = fluid.dygraph.to_variable(data)
......@@ -38,9 +41,9 @@ no_grad
print(l0.weight.gradient() is None) # False
# 用作装饰器
@fluid.dygraph.no_grad
@fluid.no_grad()
def test_layer():
with fluid.dygraph.guard():
inp = np.ones([3, 1024], dtype='float32')
t = fluid.dygraph.base.to_variable(inp)
linear1 = fluid.Linear(1024, 4, bias_attr=False)
......
......@@ -6,19 +6,23 @@ to_variable
.. py:function:: paddle.fluid.dygraph.to_variable(value, name=None, zero_copy=None)
:api_attr: 命令式编程模式(动态图)
该函数实现从numpy\.ndarray对象或者Variable对象创建一个 ``Variable`` 类型的对象。
该函数实现从tuple、list、numpy\.ndarray、Variable、ComplexVariable 对象创建一个 ``Variable`` 类型的对象。
参数:
- **value** (ndarray|Variable) – 需要转换的numpy\.ndarray或Variable对象,维度可以为多维,数据类型为numpy\.{float16, float32, float64, int16, int32, int64, uint8, uint16}中的一种。
- **value** (tuple|list|ndarray|Variable|Tensor|ComplexVariable) – 初始化的数据。可以是tuple、list、numpy\.ndarray、Variable、ComplexVariable。
维度可以为多维,数据类型为numpy\.{float16, float32, float64, int16, int32, int64, uint8, uint16}中的一种。
- **name** (str, 可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
- **zero_copy** (bool, 可选) – 是否与输入的numpy数组共享内存。此参数仅适用于CPUPlace,当它为None时将设置为True。默认值为None。
- **dtype** (str, 可选) - 返回的 ``Variable`` 所需的数据类型。可以是 'bool','float16','float32','float64','int8','int16','int32','int64','uint8'。默认值: None。
返回:如果 ``value`` 是numpy\.ndarray对象,返回由numpy\.ndarray对象创建的 ``Tensor`` ,其数据类型和维度与 ``value`` 一致;如果 ``value`` 是Variable对象,返回 ``value`` 。
返回:如果 ``value`` 是tuple/list/numpy\.ndarray对象,返回对应numpy\.ndarray对象创建的 ``Tensor`` ;如果 ``value`` 是Variable对象,直接返回 ``value`` 。
返回类型:Variable
......@@ -28,13 +32,25 @@ to_variable
import numpy as np
import paddle.fluid as fluid
with fluid.dygraph.guard(fluid.CPUPlace()):
x = np.ones([2, 2], np.float32)
y = fluid.dygraph.to_variable(x, zero_copy=False)
x[0][0] = -1
y[0][0].numpy() # array([1.], dtype=float32)
y = fluid.dygraph.to_variable(x)
x[0][0] = 0
y[0][0].numpy() # array([0.], dtype=float32)
c = np.array([2+1j, 2])
z = fluid.dygraph.to_variable(c)
z.numpy() # array([2.+1.j, 2.+0.j])
z.dtype # 'complex128'
y = fluid.dygraph.to_variable([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
y.shape # [3L, 2L]
y = fluid.dygraph.to_variable(((0.1, 1.2), (2.2, 3.1), (4.9, 5.2)), dtype='int32')
y.shape # [3L, 2L]
y.dtype # core.VarDesc.VarType.INT32
......@@ -39,7 +39,7 @@ Executor支持单GPU、多GPU以及CPU运行。
train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
data = fluid.layers.data(name='X', shape=[1], dtype='float32')
data = fluid.data(name='X', shape=[None, 1], dtype='float32')
hidden = fluid.layers.fc(input=data, size=10)
loss = fluid.layers.mean(hidden)
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)
......@@ -95,7 +95,7 @@ Executor支持单GPU、多GPU以及CPU运行。
exe.close()
.. py:method:: run(program=None, feed=None, fetch_list=None, feed_var_name='feed', fetch_var_name='fetch', scope=None, return_numpy=True,use_program_cache=False)
.. py:method:: run(program=None, feed=None, fetch_list=None, feed_var_name='feed', fetch_var_name='fetch', scope=None, return_numpy=True, use_program_cache=False, use_prune=False)
执行指定的Program或者CompiledProgram。需要注意的是,执行器会执行Program或CompiledProgram中的所有算子,而不会根据fetch_list对Program或CompiledProgram中的算子进行裁剪。同时,需要传入运行该模型用到的scope,如果没有指定scope,执行器将使用全局scope,即fluid.global_scope()。
......@@ -130,7 +130,7 @@ Executor支持单GPU、多GPU以及CPU运行。
place = fluid.CPUPlace() # fluid.CUDAPlace(0)
exe = fluid.Executor(place)
data = fluid.layers.data(name='X', shape=[1], dtype='float32')
data = fluid.data(name='X', shape=[None, 1], dtype='float32')
hidden = fluid.layers.fc(input=data, size=10)
loss = fluid.layers.mean(hidden)
adam = fluid.optimizer.Adam()
......@@ -175,8 +175,8 @@ train_from_dataset可以非常容易扩展到大规模分布式在线和离线
place = fluid.CPUPlace() # 通过设置place = fluid.CUDAPlace(0)使用GPU
exe = fluid.Executor(place)
x = fluid.layers.data(name="x", shape=[10, 10], dtype="int64")
y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)
x = fluid.data(name="x", shape=[None, 10, 10], dtype="int64")
y = fluid.data(name="y", shape=[None, 1], dtype="int64", lod_level=1)
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_use_var([x, y])
dataset.set_thread(1)
......@@ -210,12 +210,13 @@ train_from_dataset可以非常容易扩展到大规模分布式在线和离线
import paddle.fluid as fluid
place = fluid.CPUPlace() # 使用GPU时可设置place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
x = fluid.layers.data(name="x", shape=[10, 10], dtype="int64")
y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)
x = fluid.data(name="x", shape=[None, 10, 10], dtype="int64")
y = fluid.data(name="y", shape=[None, 1], dtype="int64", lod_level=1)
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_use_var([x, y])
dataset.set_thread(1)
filelist = [] # 您可以设置您自己的filelist,如filelist = ["dataA.txt"]
dataset.set_filelist(filelist)
exe.run(fluid.default_startup_program())
exe.infer_from_dataset(program=fluid.default_main_program(),dataset=dataset)
exe.infer_from_dataset(program=fluid.default_main_program(),
dataset=dataset)
......@@ -52,6 +52,7 @@ fluid
fluid_cn/save_cn.rst
fluid_cn/scope_guard_cn.rst
fluid_cn/set_flags_cn.rst
fluid_cn/set_global_initializer_cn.rst
fluid_cn/Tensor_cn.rst
fluid_cn/Variable_cn.rst
fluid_cn/WeightNormParamAttr_cn.rst
.. _cn_api_fluid_set_global_initializer:
set_global_initializer
-------------------------------
.. py:function:: paddle.fluid.set_global_initializer(weight_init, bias_init=None)
该API用于设置Paddle框架中全局的参数初始化方法。该API只对位于其后的代码生效。
模型参数为模型中的weight和bias统称,在fluid中对应fluid.Parameter类,继承自fluid.Variable,是一种可持久化的variable。
该API的设置仅对模型参数生效,对通过 :ref:`cn_api_fluid_layers_create_global_var` 、 :ref:`cn_api_fluid_layers_create_tensor` 等API创建的变量不会生效。
如果创建网络层时还通过 ``param_attr`` 、 ``bias_attr`` 设置了初始化方式,这里的全局设置将不会生效,因为其优先级更低。
参数:
- **weight_init** (Initializer) - 设置框架的全局的weight参数初始化方法。
- **bias_init** (Initializer,可选) - 设置框架的全局的bias参数初始化方法。默认:None。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
fluid.set_global_initializer(fluid.initializer.Uniform(), fluid.initializer.Constant())
x = fluid.data(name="x", shape=[1, 3, 32, 32])
# conv1的weight参数是通过Uniform来初始化
# conv1的bias参数是通过Constant来初始化
conv1 = fluid.layers.conv2d(x, 5, 3)
# 如果同时设置了param_attr/bias_attr, 全局初始化将不会生效
# conv2的weight参数是通过Xavier来初始化
# conv2的bias参数是通过Normal来初始化
conv2 = fluid.layers.conv2d(conv1, 5, 3,
param_attr=fluid.initializer.Xavier(),
bias_attr=fluid.initializer.Normal())
# 取消全局参数初始化的设置
fluid.set_global_initializer(None)
\ No newline at end of file
......@@ -13,6 +13,7 @@ paddle.imperative
imperative_cn/grad_cn.rst
imperative_cn/guard_cn.rst
imperative_cn/InverseTimeDecay_cn.rst
imperative_cn/jit_cn.rst
imperative_cn/load_cn.rst
imperative_cn/load_dygraph_cn.rst
imperative_cn/NaturalExpDecay_cn.rst
......@@ -27,3 +28,4 @@ paddle.imperative
imperative_cn/save_dygraph_cn.rst
imperative_cn/to_variable_cn.rst
imperative_cn/TracedLayer_cn.rst
imperative_cn/TranslatedLayer_cn.rst
.. _cn_api_imperative_TranslatedLayer:
TranslatedLayer
-------------------------------
:doc_source: paddle.fluid.dygraph.io.TranslatedLayer
===
jit
===
.. toctree::
:maxdepth: 1
jit_cn/save_cn.rst
jit_cn/load_cn.rst
jit_cn/SaveLoadConfig_cn.rst
\ No newline at end of file
.. _cn_api_imperative_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.SaveLoadConfig
\ No newline at end of file
.. _cn_api_imperative_jit_load:
load
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.load
\ No newline at end of file
.. _cn_api_imperative_jit_save:
save
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.save
\ No newline at end of file
......@@ -107,3 +107,21 @@ Note。
io_cn.rst
utils_cn.rst
incubate_cn.rst
fluid_cn.rst
backward_cn.rst
clip_cn.rst
data_cn/data_reader_cn.rst
data_cn/dataset_cn.rst
dataset_cn.rst
dygraph_cn.rst
executor_cn.rst
initializer_cn.rst
io_cn.rst
layers_cn.rst
metrics_cn.rst
nets_cn.rst
optimizer_cn.rst
profiler_cn.rst
regularizer_cn.rst
transpiler_cn.rst
unique_name_cn.rst
......@@ -11,7 +11,7 @@ MSRAInitializer
该接口实现MSRA方式的权重初始化(a.k.a. Kaiming初始化)
该接口为权重初始化函数,方法来自Kaiming He,Xiangyu Zhang,Shaoqing Ren 和 Jian Sun所写的论文: `Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification <https://arxiv.org/abs/1502.01852>`_ 。这是一个鲁棒性特别强的初始化方法,并且适应了非线性激活函数(rectifier nonlinearities)。
可以选择使用均匀分布或者正太分布初始化权重;
可以选择使用均匀分布或者正态分布初始化权重;
在均匀分布中,范围为[-x,x],其中:
.. math::
......
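以下补充一段示意代码(非本节原有示例,仅作参考;假设按本页描述使用均匀分布,即 ``uniform=True``),展示通过 ``param_attr`` 使用 ``MSRAInitializer`` 初始化全连接层权重:

.. code-block:: python

    import paddle.fluid as fluid

    x = fluid.data(name="x", shape=[None, 32], dtype="float32")
    # 使用MSRA(Kaiming)均匀分布初始化fc层的权重
    fc = fluid.layers.fc(
        input=x, size=10,
        param_attr=fluid.initializer.MSRAInitializer(uniform=True))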
......@@ -22,8 +22,9 @@ NumpyArrayInitializer
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[5], dtype='float32')
fc = fluid.layers.fc(input=x, size=10,
import numpy
x1 = fluid.data(name="x1", shape=[2, 1], dtype='float32')
fc = fluid.layers.fc(input=x1, size=10,
param_attr=fluid.initializer.NumpyArrayInitializer(numpy.array([1,2])))
......@@ -11,23 +11,29 @@ abs
绝对值激活函数。
绝对值函数。
.. math::
out = |x|
参数:
- **x** (Variable)- 多维Tensor,数据类型为float32或float64。
- **name** (str) – 该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` ,默认值为None
- x (Tensor) - 输入的Tensor,数据类型为:float32、float64。
- name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`
返回:表示绝对值结果的Tensor,数据类型与x相同。
返回:输出Tensor,与 ``x`` 维度相同、数据类型相同。
返回类型:Variable
返回类型:Tensor
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.abs(data)
import paddle
import numpy as np
paddle.disable_static()
x_data = np.array([-1, -2, -3, -4]).astype(np.float32)
x = paddle.to_variable(x_data)
res = paddle.abs(x)
print(res.numpy())
# [1, 2, 3, 4]
......@@ -11,29 +11,30 @@ acos
arccosine激活函数。
arccosine函数。
.. math::
out = cos^{-1}(x)
参数:
- **x(Variable)** - acos的输入Tensor,数据类型为 float32 或 float64
- **name** (str|None) – 具体用法请参见 :ref:`cn_api_guide_Name` ,一般无需设置,默认值为None。
返回: `acos` 的输出Tensor,数据类型与 `x` 相同。
- x (Tensor) - 输入的Tensor,数据类型为:float32、float64。
- name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
返回类型: Variable
返回:输出Tensor,与 ``x`` 维度相同、数据类型相同。
返回类型: Tensor
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[4])
# if data is [-0.8183, 0.4912, -0.6444, 0.0371]
result = fluid.layers.acos(data)
# result is [2.5293, 1.0573, 2.2711, 1.5336]
import paddle
import numpy as np
paddle.disable_static()
x_data = np.array([-0.8183, 0.4912, -0.6444, 0.0371]).astype(np.float32)
x = paddle.to_variable(x_data)
res = paddle.acos(x)
print(res.numpy())
# [2.5293, 1.0573, 2.2711, 1.5336]
......@@ -11,29 +11,29 @@ asin
arcsine激活函数。
arcsine函数。
.. math::
out = sin^{-1}(x)
参数:
- **x(Variable)** - asin的输入Tensor,数据类型为 float32 或 float64
- **name** (str|None) – 具体用法请参见 :ref:`cn_api_guide_Name` ,一般无需设置,默认值为None
- x (Tensor) - 输入的Tensor,数据类型为:float32、float64。
- name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`
返回: `asin` 的输出Tensor,数据类型与 `x` 相同。
返回:输出Tensor,与 ``x`` 维度相同、数据类型相同。
返回类型: Variable
返回类型: Tensor
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[4])
# if data is [-0.8183, 0.4912, -0.6444, 0.0371]
result = fluid.layers.asin(data)
# result is [-0.9585, 0.5135, -0.7003, 0.0372]
import paddle
import numpy as np
paddle.disable_static()
x_data = np.array([-0.8183, 0.4912, -0.6444, 0.0371]).astype(np.float32)
x = paddle.to_variable(x_data)
res = paddle.asin(x)
print(res.numpy())
# [-0.9585, 0.5135, -0.7003, 0.0372]
......@@ -11,30 +11,29 @@ atan
arctanh激活函数。
arctangent函数。
.. math::
out = tanh^{-1}(x)
out = tan^{-1}(x)
参数:
- **x(Variable)** - atan的输入Tensor,数据类型为 float32 或 float64
- **name** (str|None) – 具体用法请参见 :ref:`cn_api_guide_Name` ,一般无需设置,默认值为None
- x (Tensor) - 输入的Tensor,数据类型为:float32、float64。
- name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`
返回: `atan` 的输出Tensor,数据类型与 `x` 相同。
返回:输出Tensor,与 ``x`` 维度相同、数据类型相同。
返回类型: Variable
返回类型: Tensor
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[4])
# if data is [-0.8183, 0.4912, -0.6444, 0.0371]
result = fluid.layers.atan(data)
# result is [-0.6858, 0.4566, -0.5724, 0.0371]
import paddle
import numpy as np
paddle.disable_static()
x_data = np.array([-0.8183, 0.4912, -0.6444, 0.0371]).astype(np.float32)
x = paddle.to_variable(x_data)
res = paddle.atan(x)
print(res.numpy())
# [-0.6858, 0.4566, -0.5724, 0.0371]
......@@ -19,24 +19,24 @@ ceil
参数:
- **x** (Variable) - 该OP的输入为多维Tensor。数据类型为float32或float64。
- **name** (str, 可选) - 具体用法请参见 :ref:`api_guide_Name`,一般无需设置,默认值为None
- x (Tensor) - 输入的Tensor,数据类型为:float32、float64。
- name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`
返回: 输出为Tensor,与 ``x`` 维度相同、数据类型相同。
返回:输出Tensor,与 ``x`` 维度相同、数据类型相同。
返回类型: Variable
返回类型: Tensor
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
input_ceil = np.array([[-1.5,6],[1,15.6]])
with fluid.dygraph.guard():
x = fluid.dygraph.to_variable(input_ceil)
y = fluid.layers.ceil(x)
print(y.numpy())
paddle.disable_static()
x_data = np.array([[-1.5,6],[1,15.6]]).astype(np.float32)
x = paddle.to_variable(x_data)
res = paddle.ceil(x)
print(res.numpy())
# [[-1. 6.]
# [ 1. 16.]]
......@@ -3,24 +3,24 @@
concat
-------------------------------
.. py:function:: paddle.fluid.layers.concat(input,axis=0,name=None)
.. py:function:: paddle.fluid.layers.concat(input, axis=0, name=None)
:alias_main: paddle.concat
:alias: paddle.concat,paddle.tensor.concat,paddle.tensor.manipulation.concat
:old_api: paddle.fluid.layers.concat
该OP对输入沿 ``axis`` 轴进行联结。
该OP对输入沿 ``axis`` 轴进行联结,返回一个新的Tensor。
参数:
- **input** (list) - 输入是待联结的多维 ``Tensor`` 组成的 ``list`` ,支持的数据类型为:float32、float64、int32、int64
- **axis** (int|Variable,可选) - 整数或者形状为[1]的 ``Tensor``,数据类型为 ``int32``。指定对输入Tensor进行运算的轴, ``axis`` 的有效范围是[-R, R),R是输入 ``input`` 中 ``Tensor`` 的维度, ``axis`` 为负值时与 :math:`axis + R` 等价。默认值为0。
- **input** (list|tuple|Tensor) - 待联结的Tensor list,Tensor tuple或者Tensor,支持的数据类型为:bool、float16、 float32、float64、int32、int64。 ``input`` 中所有Tensor的数据类型必须一致
- **axis** (int|Tensor,可选) - 指定对输入Tensor进行运算的轴,可以是整数或者形状为[1]的Tensor,数据类型为int32或者int64。 ``axis`` 的有效范围是[-R, R),R是输入 ``input`` 中Tensor 的维度, ``axis`` 为负值时与 :math:`axis + R` 等价。默认值为0。
- **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:联结后的 ``Tensor`` ,数据类型和 ``input`` 相同。
返回:联结后的 ``Tensor`` ,数据类型和 ``input`` 中的Tensor相同。
返回类型:Variable
抛出异常:
- ``TypeError``: - 当输入 ``input`` 的类型不是list、tuple或者Tensor的时候。
- ``TypeError``: - 当输入 ``input`` 的数据类型不是 bool,float16, float32, float64, int32, int64时。
- ``TypeError``: - 当 ``axis`` 的类型不是int或者Tensor时。当 ``axis`` 是Tensor的时候其数据类型不是int32或者int64时。
- ``TypeError``: - 当输入 ``input`` 中的Tensor存在数据类型不一致时。
**代码示例**:
......@@ -29,18 +29,18 @@ concat
import paddle.fluid as fluid
import numpy as np
in1 = np.array([[1,2,3],
[4,5,6]])
in2 = np.array([[11,12,13],
[14,15,16]])
in3 = np.array([[21,22],
[23,24]])
in1 = np.array([[1, 2, 3],
[4, 5, 6]])
in2 = np.array([[11, 12, 13],
[14, 15, 16]])
in3 = np.array([[21, 22],
[23, 24]])
with fluid.dygraph.guard():
x1 = fluid.dygraph.to_variable(in1)
x2 = fluid.dygraph.to_variable(in2)
x3 = fluid.dygraph.to_variable(in3)
out1 = fluid.layers.concat(input=[x1,x2,x3], axis=-1)
out2 = fluid.layers.concat(input=[x1,x2], axis=0)
out1 = fluid.layers.concat(input=[x1, x2, x3], axis=-1)
out2 = fluid.layers.concat(input=[x1, x2], axis=0)
print(out1.numpy())
# [[ 1 2 3 11 12 13 21 22]
# [ 4 5 6 14 15 16 23 24]]
......
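作为补充,下面给出一个把 ``axis`` 以形状为[1]的Tensor传入的最小示意(写法基于上述参数说明,结果注释按联结语义推算):

.. code-block:: python

import paddle.fluid as fluid
import numpy as np

# 以Tensor形式指定axis(基于上述参数说明的示意写法)
x1 = fluid.layers.assign(np.array([[1, 2, 3]], dtype='int32'))
x2 = fluid.layers.assign(np.array([[4, 5, 6]], dtype='int32'))
axis = fluid.layers.fill_constant(shape=[1], dtype='int32', value=0)
out = fluid.layers.concat(input=[x1, x2], axis=axis)

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(fetch_list=[out])
print(res)
# [[1 2 3]
#  [4 5 6]]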
......@@ -13,32 +13,31 @@ cos
余弦函数。
输入范围是 `(-inf, inf)` , 输出范围是 `[-1,1]`。若输入超出边界则结果为`nan`。
.. math::
out = cos(x)
参数:
- **x** (Variable) - 该OP的输入为多维Tensor,数据类型为float32,float64。
- **name** (str, 可选) - 具体用法请参见 :ref:`api_guide_Name`,一般无需设置,默认值为None。
- x (Tensor) - 输入的Tensor,数据类型为:float32、float64。
- name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
返回:输出Tensor,与 ``x`` 维度相同、数据类型相同。
返回:输出Tensor,与 ``x`` 维度相同、数据类型相同。
返回类型:Variable
返回类型:Tensor
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
input_cos = np.array([[-1,np.pi],[1,15.6]])
with fluid.dygraph.guard():
x = fluid.dygraph.to_variable(input_cos)
y = fluid.layers.cos(x)
print(y.numpy())
paddle.disable_static()
x_data = np.array([[-1,np.pi],[1,15.6]]).astype(np.float32)
x = paddle.to_variable(x_data)
res = paddle.cos(x)
print(res.numpy())
# [[ 0.54030231 -1. ]
# [ 0.54030231 -0.99417763]]
......@@ -104,7 +104,7 @@ crop_tensor
# crop3.shape = [-1, 2, 3]
# offsets is a list in which each element is a constant or Tensor
offsets_var = fluid.data(name="dim1", shape=[1], dtype="int32")
offsets_var = fluid.data(name="offset", shape=[1], dtype="int32")
crop4 = fluid.layers.crop_tensor(x, shape=[-1, 2, 3], offsets=[0, 1, offsets_var])
# crop4.shape = [-1, 2, 3]
......@@ -5,11 +5,6 @@ cumsum
.. py:function:: paddle.fluid.layers.cumsum(x,axis=None,exclusive=None,reverse=None)
:alias_main: paddle.cumsum
:alias: paddle.cumsum,paddle.tensor.cumsum,paddle.tensor.math.cumsum
:old_api: paddle.fluid.layers.cumsum
沿给定轴(axis)的元素的累加和。默认结果的第一个元素和输入的第一个元素一致。如果exclusive为True,结果的第一个元素则为0。
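下面是一个最小用法示意(写法基于上述说明,结果注释按累加语义推算):

.. code-block:: python

import paddle.fluid as fluid
import numpy as np

data = fluid.layers.assign(np.array([[1, 2, 3], [4, 5, 6]], dtype='float32'))
out = fluid.layers.cumsum(data, axis=1)  # 沿第1维逐元素累加

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(fetch_list=[out])
print(res)
# [[ 1.  3.  6.]
#  [ 4.  9. 15.]]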
......
......@@ -48,7 +48,7 @@ deformable_roi_pooling
.. code-block:: python
#position_sensitiveFalse
#position_sensitive=False
import paddle.fluid as fluid
input = fluid.data(name="input",
......@@ -74,7 +74,7 @@ deformable_roi_pooling
trans_std=0.1,
position_sensitive=False)
#position_sensitiveTrue
#position_sensitive=True
import paddle.fluid as fluid
input = fluid.data(name="input",
......
......@@ -28,12 +28,12 @@ double_buffer
.. code-block:: python
import paddle.fluid as fluid
reader = fluid.layers.open_files(filenames=['mnist.recordio'],
shapes=[[-1, 784], [-1, 1]],
lod_levels=[0, 0],
dtypes=['float32', 'int64'])
reader = fluid.layers.py_reader(capacity=64,
shapes=[(-1, 1, 28, 28), (-1, 1)],
dtypes=['float32', 'int64'],
use_double_buffer=False)
reader = fluid.layers.double_buffer(reader)
img, label = fluid.layers.read_file(reader)
image, label = fluid.layers.read_file(reader)
......
......@@ -107,7 +107,7 @@ elementwise_add
"y": np.random.randint(1, 5, size=[5]).astype('float32')
}
x = fluid.layers.data(name="x", shape=[2,3,4,5], dtype='float32')
y = fluid.layers.data(name="y", shape=[3,4], dtype='float32')
y = fluid.layers.data(name="y", shape=[5], dtype='float32')
# z = x + y
z = fluid.layers.elementwise_add(x, y, axis=3)
place = fluid.CPUPlace()
......
......@@ -107,7 +107,7 @@ elementwise_div
"y": np.random.randint(1, 5, size=[5]).astype('float32')
}
x = fluid.layers.data(name="x", shape=[2,3,4,5], dtype='float32')
y = fluid.layers.data(name="y", shape=[3,4], dtype='float32')
y = fluid.layers.data(name="y", shape=[5], dtype='float32')
z = fluid.layers.elementwise_div(x, y, axis=3)
# z = x / y
place = fluid.CPUPlace()
......
......@@ -107,7 +107,7 @@ elementwise_sub
"y": np.random.randint(1, 5, size=[5]).astype('float32')
}
x = fluid.layers.data(name="x", shape=[2,3,4,5], dtype='float32')
y = fluid.layers.data(name="y", shape=[3,4], dtype='float32')
y = fluid.layers.data(name="y", shape=[5], dtype='float32')
z = fluid.layers.elementwise_sub(x, y, axis=3)
# z = x - y
place = fluid.CPUPlace()
......
......@@ -3,9 +3,7 @@
equal
-------------------------------
.. py:function:: paddle.fluid.layers.equal(x,y,cond=None)
.. py:function:: paddle.fluid.layers.equal(x, y, cond=None, name=None)
该OP返回 :math:`x==y` 逐元素比较x和y是否相等,x和y的维度应该相同。
......@@ -13,7 +11,8 @@ equal
参数:
- **x** (Variable) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Variable) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **cond** (Variable,可选) - 逐元素比较的结果Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **cond** (Variable,可选) – 如果为None,则创建一个Tensor来作为进行比较的输出结果,该Tensor的shape和数据类型和输入x一致;如果不为None,则将Tensor作为该OP的输出,数据类型和数据shape需要和输入x一致。默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
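下面是一个最小用法示意(写法基于上述参数说明,结果注释按逐元素比较语义推算):

.. code-block:: python

import paddle.fluid as fluid
import numpy as np

x = fluid.layers.assign(np.array([1, 2, 3], dtype='int32'))
y = fluid.layers.assign(np.array([1, 3, 3], dtype='int32'))
out = fluid.layers.equal(x, y)  # 逐元素比较,输出bool类型Tensor

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(fetch_list=[out])
print(res)  # [True, False, True]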
......
......@@ -3,25 +3,24 @@
eye
-------------------------------
.. py:function:: paddle.fluid.layers.eye(num_rows, num_columns=None, batch_shape=None, dtype='float32')
.. py:function:: paddle.fluid.layers.eye(num_rows, num_columns=None, batch_shape=None, dtype='float32', name=None)
:alias_main: paddle.eye
:alias: paddle.eye,paddle.tensor.eye,paddle.tensor.creation.eye
:update_api: paddle.fluid.layers.eye
该OP用来构建单位矩阵,或一个批次的单位矩阵。
该OP用来构建一个对角线上元素为1、其余元素为0的二维Tensor,或一个批次的此类二维Tensor。
参数:
- **num_rows** (int) - 每一个批矩阵的行数,数据类型为非负int32。
- **num_columns** (int) - 每一个批矩阵的列数,数据类型为非负int32。若为None,则默认等于num_rows。
- **batch_shape** (list(int)) - 如若提供,则返回向量的主批次维度将为batch_shape。
- **dtype** (string) - 返回张量的数据类型,可为int32,int64,float16,float32,float64。
- **num_rows** (int) - 该批次二维Tensor的行数,数据类型为非负int32。
- **num_columns** (int, 可选) - 该批次二维Tensor的列数,数据类型为非负int32。若为None,则默认等于num_rows。
- **batch_shape** (list(int), 可选) - 如若提供,则返回Tensor的主批次维度将为batch_shape。
- **dtype** (np.dtype|core.VarDesc.VarType|str,可选) - 返回Tensor的数据类型,可为int32,int64,float16,float32,float64,默认数据类型为float32。
- **name** (str) – 该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` ,默认值为None。
返回: ``shape`` 为batch_shape + [num_rows, num_columns]的Tensor。
返回:shape为batch_shape + [num_rows, num_columns]的张量。
返回类型:Variable(Tensor|LoDTensor)数据类型为int32,int64,float16,float32,float64的Tensor或者LoDTensor。
抛出异常:
- ``TypeError``: - 如果 ``dtype`` 的类型不是float16, float32, float64, int32, int64其中之一。
- ``TypeError``: - 如果 ``num_columns`` 不是非负整数或者 ``num_rows`` 不是非负整数。
**代码示例**:
......
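下面是一个最小用法示意(结果注释按“对角线为1、其余为0”的语义推算):

.. code-block:: python

import paddle.fluid as fluid

data = fluid.layers.eye(3, dtype='int32')
# [[1, 0, 0],
#  [0, 1, 0],
#  [0, 0, 1]]

data_batch = fluid.layers.eye(2, 3, batch_shape=[2], dtype='float32')
# data_batch 的 shape 为 [2, 2, 3],每个批次都是对角线为1的 2x3 矩阵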
......@@ -28,7 +28,8 @@ fill_constant
返回类型:变量(Variable)
抛出异常:
- :code:`TypeError`: dtype必须是bool,float16,float32,float64,int32和int64之一,并且输出Tensor的数据类型必须与dtype相同。
- :code:`TypeError`: dtype必须是bool,float16,float32,float64,int32和int64之一,输出Tensor的数据类型必须与dtype相同。
- :code:`TypeError`: 当 `shape` 的数据类型不是list、tuple、Variable。
**代码示例**:
......@@ -43,6 +44,6 @@ fill_constant
positive_2 = fluid.layers.fill_constant([1], "int32", 2)
data3 = fluid.layers.fill_constant(shape=[1, positive_2], dtype='float32', value=1.5) # data3=[1.5, 1.5]
# attr shape is an Variable Tensor.
# attr shape is a Variable Tensor.
shape = fluid.layers.fill_constant([1,2], "int32", 2) # shape=[2,2]
data4 = fluid.layers.fill_constant(shape=shape, dtype='bool', value=True) # data4=[[True,True],[True,True]]
......@@ -3,30 +3,29 @@
gaussian_random
-------------------------------
.. py:function:: paddle.fluid.layers.gaussian_random(shape, mean=0.0, std=1.0, seed=0, dtype='float32')
.. py:function:: paddle.fluid.layers.gaussian_random(shape, mean=0.0, std=1.0, seed=0, dtype='float32', name=None)
生成数据符合高斯随机分布的 Tensor
该OP返回数值符合高斯随机分布的Tensor,形状为 ``shape``,数据类型为 ``dtype``
参数:
- **shape** (Tuple[int] | List[int])- 生成 Tensor 的形状。
- **mean** (float)- 随机 Tensor 的均值,默认值为 0.0。
- **std** (float)- 随机 Tensor 的标准差,默认值为 1.0。
- **seed** (int)- 随机数种子,默认值为 0。注:seed 设置为 0 表示使用系统的随机数种子。注意如果 seed 不为 0,则此算子每次将始终生成相同的随机数。
- **dtype** (np.dtype | core.VarDesc.VarType | str)- 输出 Tensor 的数据类型,可选值为 float32,float64。
- **shape** (list|tuple|Tensor) - 生成的随机Tensor的形状。如果 ``shape`` 是list、tuple,则其中的元素可以是int,或者是形状为[1]且数据类型为int32、int64的Tensor。如果 ``shape`` 是Tensor,则是数据类型为int32、int64的1-D Tensor。
- **mean** (float|int, 可选) - 输出Tensor的均值,支持的数据类型:float、int。默认值为0.0。
- **std** (float|int, 可选) - 输出Tensor的标准差,支持的数据类型:float、int。默认值为1.0。
- **seed** (int, 可选) - 随机数种子,默认值为 0。注:seed 设置为 0 表示使用系统的随机数种子。注意如果 seed 不为 0,则此算子每次将始终生成相同的随机数。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持float32、float64。默认值为float32。
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回:
Tensor:符合高斯随机分布的Tensor,形状为 ``shape``,数据类型为 ``dtype``。
- 符合高斯分布的随机 Tensor。形状为 shape,数据类型为 dtype。
抛出异常:
- ``TypeError`` - 如果 ``shape`` 的类型不是list、tuple、Tensor。
- ``TypeError`` - 如果 ``dtype`` 不是float32、float64。
返回类型:
- Variable
**代码示例:**
**代码示例**:
.. code-block:: python
......
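下面是一个最小用法示意(写法基于上述参数说明,其中 shape 含 Tensor 的写法仅为示意):

.. code-block:: python

import paddle.fluid as fluid

# shape 为 list 的用法
out1 = fluid.layers.gaussian_random(shape=[2, 3], mean=0.0, std=2.0, dtype='float32')

# 按上述参数说明,shape 中的元素也可以是形状为[1]的int Tensor(示意写法)
dim = fluid.layers.fill_constant(shape=[1], dtype='int64', value=3)
out2 = fluid.layers.gaussian_random(shape=[2, dim], mean=0.0, std=1.0)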
......@@ -3,7 +3,7 @@
greater_equal
-------------------------------
.. py:function:: paddle.fluid.layers.greater_equal(x, y, cond=None)
.. py:function:: paddle.fluid.layers.greater_equal(x, y, cond=None, name=None)
:alias_main: paddle.greater_equal
:alias: paddle.greater_equal,paddle.tensor.greater_equal,paddle.tensor.logic.greater_equal
......@@ -18,6 +18,7 @@ greater_equal
- **x** (Variable) – 进行比较的第一个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **y** (Variable) – 进行比较的第二个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **cond** (Variable,可选) – 如果为None,则创建一个Tensor来作为进行比较的输出结果,该Tensor的shape,数据类型和输入x一致;如果不为None,则将Tensor作为该OP的输出,数据shape和数据类型需要和输入x一致。默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,数据的shape和输入x一致。
......
......@@ -3,7 +3,7 @@
greater_than
-------------------------------
.. py:function:: paddle.fluid.layers.greater_than(x, y, cond=None)
.. py:function:: paddle.fluid.layers.greater_than(x, y, cond=None, name=None)
:alias_main: paddle.greater_than
:alias: paddle.greater_than,paddle.tensor.greater_than,paddle.tensor.logic.greater_than
......@@ -17,6 +17,7 @@ greater_than
- **x** (Variable) – 进行比较的第一个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **y** (Variable) – 进行比较的第二个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **cond** (Variable,可选) – 如果为None,则创建一个Tensor来作为进行比较的输出结果,该Tensor的shape和数据类型和输入x一致;如果不为None,则将Tensor作为该OP的输出,数据类型和数据shape需要和输入x一致。默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,数据的shape和输入x一致。
......
......@@ -36,8 +36,8 @@ hash
place = fluid.core.CPUPlace()
# 构建网络
x = fluid.layers.data(name="x", shape=[1], dtype="int32", lod_level=1)
res = fluid.layers.hash(name="res",input=x, hash_size=1000, num_hash=4)
x = fluid.data(name="x", shape=[2, 2], dtype="int32", lod_level=1)
res = fluid.layers.hash(name="res", input=x, hash_size=1000, num_hash=4)
# 创建CPU执行器
exe = fluid.Executor(place)
......@@ -45,9 +45,7 @@ hash
in1 = np.array([[1,2],[3,4]]).astype("int32")
print(in1)
x_i = fluid.core.LoDTensor()
x_i.set(in1,place)
x_i.set_recursive_sequence_lengths([[0,2]])
x_i = fluid.create_lod_tensor(in1, [[0, 2]], place)
res = exe.run(fluid.default_main_program(), feed={'x':x_i}, fetch_list=[res], return_numpy=False)
print(np.array(res[0]))
# [[[722]
......
......@@ -3,7 +3,7 @@
less_equal
-------------------------------
.. py:function:: paddle.fluid.layers.less_equal(x, y, cond=None)
.. py:function:: paddle.fluid.layers.less_equal(x, y, cond=None, name=None)
:alias_main: paddle.less_equal
:alias: paddle.less_equal,paddle.tensor.less_equal,paddle.tensor.logic.less_equal
......@@ -17,6 +17,7 @@ less_equal
- **x** (Variable) – 进行比较的第一个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **y** (Variable) – 进行比较的第二个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **cond** (Variable,可选) – 如果为None,则创建一个Tensor来作为进行比较的输出结果,该Tensor的shape和数据类型和输入x一致;如果不为None,则将Tensor作为该OP的输出,数据类型和数据shape需要和输入x一致。默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,数据的shape和输入x一致。
......
......@@ -3,7 +3,7 @@
less_than
-------------------------------
.. py:function:: paddle.fluid.layers.less_than(x, y, force_cpu=None, cond=None)
.. py:function:: paddle.fluid.layers.less_than(x, y, force_cpu=None, cond=None, name=None)
:alias_main: paddle.less_than
:alias: paddle.less_than,paddle.tensor.less_than,paddle.tensor.logic.less_than
......@@ -20,6 +20,7 @@ less_than
- **y** (Variable) - 进行比较的第二个输入,是一个多维的LoDTensor/Tensor,数据类型可以是float32,float64,int32,int64。
- **force_cpu** (bool) – 如果为True则强制将输出变量写入CPU内存中,否则将其写入目前所在的运算设备上。默认值为False。注意:该属性已弃用,其值始终是False。
- **cond** (Variable,可选) – 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的LoDTensor/Tensor,数据的shape和输入x一致。
......
......@@ -3,22 +3,27 @@
linspace
-------------------------------
.. py:function:: paddle.fluid.layers.linspace(start, stop, num, dtype)
.. py:function:: paddle.fluid.layers.linspace(start, stop, num, dtype=None, name=None)
该OP在给定区间内返回固定数目的均匀间隔的值。
该OP返回一个Tensor,Tensor的值为在区间start和stop上均匀间隔的num个值,输出Tensor的长度为num。
**注意:该OP不进行梯度计算**
参数:
- **start** (float|Variable) – start是区间开始的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **stop** (float|Variable) – end是区间结束的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **num** (int|Variable) – num是给定区间内需要划分的区间数,可以是一个整型标量,或是一个shape为[1]的Tensor,该Tensor的数据类型需为int32。
- **dtype** (string) – 输出Tensor的数据类型,可以是‘float32’或者是‘float64’。
- **start** (float|Tensor) – ``start`` 是区间开始的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **stop** (float|Tensor) – ``end`` 是区间结束的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **num** (int|Tensor) – ``num`` 是给定区间内需要划分的区间数,可以是一个整型标量,或是一个shape为[1]的Tensor,该Tensor的数据类型需为int32。
- **dtype** (string, 可选) – 输出Tensor的数据类型,可以是float32或者是float64,如果dtype的数据类型为None,输出Tensor数据类型为float32。
- **name** (str, 可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:表示等间隔划分结果的1-D Tensor,该Tensor的shape大小为 :math:`[num]` ,在num为1的情况下,仅返回包含start元素值的Tensor。
返回类型:Variable
抛出异常:
- ``TypeError`` - 当start或者stop的数据类型不是float32或者float64。
- ``TypeError`` - 当num的数据类型不是int32。
- ``TypeError`` - 当dtype的类型不是float32或者float64。
**代码示例**:
......
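下面是一个最小用法示意(写法基于上述参数说明,结果注释按等间隔划分语义推算):

.. code-block:: python

import paddle.fluid as fluid

data1 = fluid.layers.linspace(0, 10, 5, 'float32')  # [0., 2.5, 5., 7.5, 10.]
data2 = fluid.layers.linspace(0, 10, 1, 'float32')  # [0.]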
......@@ -3,26 +3,26 @@
logical_and
-------------------------------
.. py:function:: paddle.fluid.layers.logical_and(x, y, out=None, name=None)
.. py:function:: paddle.logical_and(x, y, out=None, name=None)
:alias_main: paddle.logical_and
:alias: paddle.logical_and,paddle.tensor.logical_and,paddle.tensor.logic.logical_and
:alias: paddle.logical_and, paddle.tensor.logical_and, paddle.tensor.logic.logical_and
:old_api: paddle.fluid.layers.logical_and
该OP逐元素的对 ``X`` 和 ``Y`` 两LoDTensor/Tensor进行逻辑与运算。
该OP逐元素的对 ``x`` 和 ``y`` 进行逻辑与运算。
.. math::
Out = X \&\& Y
参数:
- **x** (Variable)- 逻辑与运算的第一个输入,是一个多维的LoDTensor/Tensor,数据类型只能是bool。
- **y** (Variable)- 逻辑与运算的第二个输入,是一个多维的LoDTensor/Tensor,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **x** (Variable)- 逻辑与运算的第一个输入,是一个 Variable,数据类型只能是bool。
- **y** (Variable)- 逻辑与运算的第二个输入,是一个 Variable,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的 Variable,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **name** (str,可选)- 该参数供开发人员打印调试信息时使用,具体用法参见 :ref:`api_guide_Name` ,默认值为None。
返回:与 ``x`` 维度相同,数据类型相同的LoDTensor/Tensor
返回:与 ``x`` 维度相同,数据类型相同的 Variable
返回类型:Variable
......@@ -31,24 +31,13 @@ logical_and
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
# Graph organizing
x = fluid.layers.data(name='x', shape=[2], dtype='bool')
y = fluid.layers.data(name='y', shape=[2], dtype='bool')
res = fluid.layers.logical_and(x=x, y=y)
# The comment lists another available method.
# res = fluid.layers.fill_constant(shape=[2], dtype='bool', value=0)
# fluid.layers.logical_and(x=x, y=y, out=res)
# Create an executor using CPU as an example
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Execute
x_i = np.array([[1, 0], [0, 1]]).astype(np.bool)
y_i = np.array([[1, 1], [0, 0]]).astype(np.bool)
res_val, = exe.run(fluid.default_main_program(), feed={'x':x_i, 'y':y_i}, fetch_list=[res])
print(res_val) # [[True, False], [False, False]]
paddle.enable_imperative()
x_data = np.array([True, True, False, False], dtype=np.bool)
y_data = np.array([True, False, True, False], dtype=np.bool)
x = paddle.imperative.to_variable(x_data)
y = paddle.imperative.to_variable(y_data)
res = paddle.logical_and(x, y)
print(res.numpy()) # [True False False False]
......@@ -3,25 +3,25 @@
logical_not
-------------------------------
.. py:function:: paddle.fluid.layers.logical_not(x, out=None, name=None)
.. py:function:: paddle.logical_not(x, out=None, name=None)
:alias_main: paddle.logical_not
:alias: paddle.logical_not,paddle.tensor.logical_not,paddle.tensor.logic.logical_not
:alias: paddle.logical_not, paddle.tensor.logical_not, paddle.tensor.logic.logical_not
:old_api: paddle.fluid.layers.logical_not
该OP逐元素的对 ``X`` LoDTensor/Tensor进行逻辑非运算
该OP逐元素的对 ``X`` Variable进行逻辑非运算
.. math::
Out = !X
参数:
- **x** (Variable)- 逻辑非运算的输入,是一个多维的LoDTensor/Tensor,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **x** (Variable)- 逻辑非运算的输入,是一个 Variable,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的 Variable,可以是程序中已经创建的任何 Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **name** (str,可选)- 该参数供开发人员打印调试信息时使用,具体用法参见 :ref:`api_guide_Name` ,默认值为None。
返回:与 ``x`` 维度相同,数据类型相同的LoDTensor/Tensor
返回:与 ``x`` 维度相同,数据类型相同的 Variable
返回类型:Variable
......@@ -29,22 +29,11 @@ logical_not
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
# Graph organizing
x = fluid.layers.data(name='x', shape=[2], dtype='bool')
res = fluid.layers.logical_not(x)
# The comment lists another availble method.
# res = fluid.layers.fill_constant(shape=[2], dtype='bool', value=0)
# fluid.layers.logical_not(x, out=res)
# Create an executor using CPU as an example
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Execute
x_i = np.array([[1, 0]]).astype(np.bool)
res_val, = exe.run(fluid.default_main_program(), feed={'x':x_i}, fetch_list=[res])
print(res_val) # [[False, True]]
paddle.enable_imperative()
x_data = np.array([True, False, True, False], dtype=np.bool)
x = paddle.imperative.to_variable(x_data)
res = paddle.logical_not(x)
print(res.numpy()) # [False True False True]
......@@ -3,26 +3,26 @@
logical_or
-------------------------------
.. py:function:: paddle.fluid.layers.logical_or(x, y, out=None, name=None)
.. py:function:: paddle.logical_or(x, y, out=None, name=None)
:alias_main: paddle.logical_or
:alias: paddle.logical_or,paddle.tensor.logical_or,paddle.tensor.logic.logical_or
:alias: paddle.logical_or, paddle.tensor.logical_or, paddle.tensor.logic.logical_or
:old_api: paddle.fluid.layers.logical_or
该OP逐元素的对 ``X`` 和 ``Y`` 两LoDTensor/Tensor进行逻辑或运算。
该OP逐元素的对 ``X`` 和 ``Y`` 进行逻辑或运算。
.. math::
Out = X || Y
参数:
- **x** (Variable)- 逻辑或运算的第一个输入,是一个多维的LoDTensor/Tensor,数据类型只能是bool。
- **y** (Variable)- 逻辑或运算的第二个输入,是一个多维的LoDTensor/Tensor,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **x** (Variable)- 逻辑或运算的第一个输入,是一个 Variable,数据类型只能是bool。
- **y** (Variable)- 逻辑或运算的第二个输入,是一个 Variable,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的 Variable,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **name** (str,可选)- 该参数供开发人员打印调试信息时使用,具体用法参见 :ref:`api_guide_Name` ,默认值为None。
返回:与 ``x`` 维度相同,数据类型相同的LoDTensor/Tensor
返回:与 ``x`` 维度相同,数据类型相同的 Variable
返回类型:Variable
......@@ -31,24 +31,13 @@ logical_or
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
# Graph organizing
x = fluid.layers.data(name='x', shape=[2], dtype='bool')
y = fluid.layers.data(name='y', shape=[2], dtype='bool')
res = fluid.layers.logical_or(x=x, y=y)
# The comment lists another available method.
# res = fluid.layers.fill_constant(shape=[2], dtype='bool', value=0)
# fluid.layers.logical_or(x=x, y=y, out=res)
# Create an executor using CPU as an example
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Execute
x_i = np.array([[1, 0], [0, 1]]).astype(np.bool)
y_i = np.array([[1, 1], [0, 0]]).astype(np.bool)
res_val, = exe.run(fluid.default_main_program(), feed={'x':x_i, 'y':y_i}, fetch_list=[res])
print(res_val) # [[True, True], [False, True]]
paddle.enable_imperative()
x_data = np.array([True, True, False, False], dtype=np.bool)
y_data = np.array([True, False, True, False], dtype=np.bool)
x = paddle.imperative.to_variable(x_data)
y = paddle.imperative.to_variable(y_data)
res = paddle.logical_or(x, y)
print(res.numpy()) # [True True True False]
......@@ -3,27 +3,27 @@
logical_xor
-------------------------------
.. py:function:: paddle.fluid.layers.logical_xor(x, y, out=None, name=None)
.. py:function:: paddle.logical_xor(x, y, out=None, name=None)
:alias_main: paddle.logical_xor
:alias: paddle.logical_xor,paddle.tensor.logical_xor,paddle.tensor.logic.logical_xor
:alias: paddle.logical_xor, paddle.tensor.logical_xor, paddle.tensor.logic.logical_xor
:old_api: paddle.fluid.layers.logical_xor
该OP逐元素的对 ``X`` 和 ``Y`` 两LoDTensor/Tensor进行逻辑异或运算。
该OP逐元素的对 ``X`` 和 ``Y`` 进行逻辑异或运算。
.. math::
Out = (X || Y) \&\& !(X \&\& Y)
参数:
- **x** (Variable)- 逻辑异或运算的第一个输入,是一个多维的LoDTensor/Tensor,数据类型只能是bool。
- **y** (Variable)- 逻辑异或运算的第二个输入,是一个多维的LoDTensor/Tensor,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **x** (Variable)- 逻辑异或运算的第一个输入,是一个 Variable,数据类型只能是bool。
- **y** (Variable)- 逻辑异或运算的第二个输入,是一个 Variable,数据类型只能是bool。
- **out** (Variable,可选)- 指定算子输出结果的 Variable,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **name** (str,可选)- 该参数供开发人员打印调试信息时使用,具体用法参见 :ref:`api_guide_Name` ,默认值为None。
返回:与 ``x`` 维度相同,数据类型相同的LoDTensor/Tensor
返回:与 ``x`` 维度相同,数据类型相同的 Variable
返回类型:Variable
......@@ -32,24 +32,13 @@ logical_xor
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
# Graph organizing
x = fluid.layers.data(name='x', shape=[2], dtype='bool')
y = fluid.layers.data(name='y', shape=[2], dtype='bool')
res = fluid.layers.logical_xor(x=x, y=y)
# The comment lists another available method.
# res = fluid.layers.fill_constant(shape=[2], dtype='bool', value=0)
# fluid.layers.logical_xor(x=x, y=y, out=res)
# Create an executor using CPU as an example
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Execute
x_i = np.array([[1, 0], [0, 1]]).astype(np.bool)
y_i = np.array([[1, 1], [0, 0]]).astype(np.bool)
res_val, = exe.run(fluid.default_main_program(), feed={'x':x_i, 'y':y_i}, fetch_list=[res])
print(res_val) # [[False, True], [False, True]]
paddle.enable_imperative()
x_data = np.array([True, True, False, False], dtype=np.bool)
y_data = np.array([True, False, True, False], dtype=np.bool)
x = paddle.imperative.to_variable(x_data)
y = paddle.imperative.to_variable(y_data)
res = paddle.logical_xor(x, y)
print(res.numpy()) # [False True True False]
.. _cn_api_fluid_layers_matrix_nms:
matrix_nms
-------------------------------
.. py:function:: paddle.fluid.layers.matrix_nms(bboxes, scores, score_threshold, post_threshold, nms_top_k, keep_top_k, use_gaussian=False, gaussian_sigma=2., background_label=0, normalized=True, return_index=False, name=None)
:alias_main: paddle.nn.functional.matrix_nms
:alias: paddle.nn.functional.matrix_nms,paddle.nn.functional.extension.matrix_nms
:old_api: paddle.fluid.layers.matrix_nms
**Matrix NMS**
该OP使用Matrix NMS算法对边界框(bounding box)和评分(scores)执行多类非极大值抑制(NMS)。
如果提供 ``score_threshold`` 阈值且 ``nms_top_k`` 大于-1,则选择置信度分数最大的k个框。 然后按照Matrix NMS算法对分数进行衰减。经过抑制后,如果 ``keep_top_k`` 大于-1, 则每张图片最终保留 ``keep_top_k`` 个检测框。
在NMS步骤后,如果keep_top_k大于-1,则每个图像最多保留keep_top_k个框(bounding box)。
参数:
- **bboxes** (Variable) - 形为[N,M,4]的3-D张量,表示将预测M个边界框的预测位置, N是批大小(batch size)。当边界框(bounding box)大小等于4时,每个边界框有四个坐标值,布局为[xmin,ymin,xmax,ymax]。数据类型为float32或float64。
- **scores** (Variable) – 形为[N,C,M]的3-D张量,表示预测的置信度。 N是批大小(batch size),C是种类数目,M是边界框bounding box的数量。对于每个类别,存在对应于M个边界框的总M个分数。请注意,M等于bboxes的第二维。数据类型为float32或float64。
- **score_threshold** (float) – 过滤掉低置信度分数的边界框的阈值。
- **post_threshold** (float) – 经过NMS衰减后,过滤掉低置信度分数的边界框的阈值。
- **nms_top_k** (int) – 基于 score_threshold 的过滤检测后,根据置信度保留的最大检测次数。
- **keep_top_k** (int) – 经过NMS抑制后, 最终保留的最大检测次数。如果设置为 -1 ,则保留全部。
- **use_gaussian** (bool) – 是否使用高斯函数衰减。默认值:False 。
- **gaussian_sigma** (float) – 高斯函数的Sigma值,默认值:2.0 。
- **background_label** (int) – 背景标签(类别)的索引,如果设置为 0 ,则忽略背景标签(类别)。如果设置为 -1 ,则考虑所有类别。默认值:0
- **normalized** (bool) – 检测是否已经经过正则化。默认值:True 。
- **return_index** (bool) – 是否同时返回保留检测框的序号。默认值:False 。
- **name** (str|None) – 具体用法请参见 :ref:`cn_api_guide_Name` ,一般无需设置,默认值为None。
返回:
- **Out** (Variable) - 形为[No,6]的2-D LoDTensor,表示检测结果。每行有6个值:[标签label,置信度confidence,xmin,ymin,xmax,ymax]。或形为[No,10]的2-D LoDTensor,用来表示检测结果。 每行有10个值:[标签label,置信度confidence,x1,y1,x2,y2,x3,y3,x4,y4]。 No是检测的总数。 如果对所有图像都没有检测到的box,则lod将设置为{1},而Out仅包含一个值-1。 (1.3版本之后,当未检测到box时,lod从{0}更改为{1})
- **Index** (Variable) - 形为[No,1]的2-D LoDTensor,表示检测结果在整个批次中的序号。
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
boxes = fluid.data(name='bboxes', shape=[None,81, 4],
dtype='float32', lod_level=1)
scores = fluid.data(name='scores', shape=[None,81],
dtype='float32', lod_level=1)
out = fluid.layers.matrix_nms(bboxes=boxes,
scores=scores,
background_label=0,
score_threshold=0.5,
post_threshold=0.1,
nms_top_k=400,
keep_top_k=200,
normalized=False)
.. _cn_api_fluid_layers_elementwise_mul:
.. _cn_api_fluid_layers_multiply:
elementwise_mul
multiply
-------------------------------
.. py:function:: paddle.fluid.layers.elementwise_mul(x, y, axis=-1, act=None, name=None)
.. py:function:: paddle.multiply(x, y, axis=-1, name=None)
:alias_main: paddle.elementwise_mul
:alias: paddle.elementwise_mul,paddle.tensor.elementwise_mul,paddle.tensor.math.elementwise_mul
:old_api: paddle.fluid.layers.elementwise_mul
:alias_main: paddle.multiply
:alias: paddle.multiply, paddle.tensor.multiply, paddle.tensor.math.multiply
......@@ -45,8 +44,7 @@ elementwise_mul
- **x** (Variable)- 多维 ``Tensor`` 或 ``LoDTensor`` 。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64``。
- **y** (Variable)- 多维 ``Tensor`` 或 ``LoDTensor`` 。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64``。
- **axis** (int32,可选)- ``y`` 的维度对应到 ``x`` 维度上时的索引。默认值为 -1。
- **act** (str,可选)- 激活函数名称,作用于输出上。默认值为None。详细请参考 :ref:`api_guide_activations` , 常见的激活函数有: ``relu`` ``tanh`` ``sigmoid`` 等。
- **name** (str,可选)- 输出的名字。默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
- **name** (string,可选)- 输出的名字。默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回: 维度与 ``x`` 相同的 ``Tensor`` 或 ``LoDTensor`` ,数据类型与 ``x`` 相同。
......@@ -57,64 +55,22 @@ elementwise_mul
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
def gen_data():
return {
"x": np.array([2, 3, 4]),
"y": np.array([1, 5, 2])
}
x = fluid.layers.data(name="x", shape=[3], dtype='float32')
y = fluid.layers.data(name="y", shape=[3], dtype='float32')
z = fluid.layers.elementwise_mul(x, y)
# z = x * y
place = fluid.CPUPlace()
exe = fluid.Executor(place)
z_value = exe.run(feed=gen_data(),
fetch_list=[z.name])
print(z_value) # [2., 15., 8.]
**代码示例 2**
paddle.enable_imperative()
x_data = np.array([[1, 2], [3, 4]], dtype=np.float32)
y_data = np.array([[5, 6], [7, 8]], dtype=np.float32)
x = paddle.imperative.to_variable(x_data)
y = paddle.imperative.to_variable(y_data)
res = paddle.multiply(x, y)
print(res.numpy()) # [[5, 12], [21, 32]]
x_data = np.array([[[1, 2, 3], [1, 2, 3]]], dtype=np.float32)
y_data = np.array([1, 2], dtype=np.float32)
x = paddle.imperative.to_variable(x_data)
y = paddle.imperative.to_variable(y_data)
res = paddle.multiply(x, y, axis=1)
print(res.numpy()) # [[[1, 2, 3], [2, 4, 6]]]
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
def gen_data():
return {
"x": np.random.randint(1, 5, size=[2, 3, 4, 5]).astype('float32'),
"y": np.random.randint(1, 5, size=[3, 4]).astype('float32')
}
x = fluid.layers.data(name="x", shape=[2,3,4,5], dtype='float32')
y = fluid.layers.data(name="y", shape=[3,4], dtype='float32')
z = fluid.layers.elementwise_mul(x, y, axis=1)
# z = x * y
place = fluid.CPUPlace()
exe = fluid.Executor(place)
z_value = exe.run(feed=gen_data(),
fetch_list=[z.name])
print(z_value) # z.shape=[2,3,4,5]
**代码示例 3**
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
def gen_data():
return {
"x": np.random.randint(1, 5, size=[2, 3, 4, 5]).astype('float32'),
"y": np.random.randint(1, 5, size=[5]).astype('float32')
}
x = fluid.layers.data(name="x", shape=[2,3,4,5], dtype='float32')
y = fluid.layers.data(name="y", shape=[3,4], dtype='float32')
z = fluid.layers.elementwise_mul(x, y, axis=3)
# z = x * y
place = fluid.CPUPlace()
exe = fluid.Executor(place)
z_value = exe.run(feed=gen_data(),
fetch_list=[z.name])
print(z_value) # z.shape=[2,3,4,5]
......
......@@ -43,15 +43,15 @@ nce
window_size = 5
words = []
for i in xrange(window_size):
words.append(fluid.layers.data(
name='word_{0}'.format(i), shape=[1], dtype='int64'))
for i in range(window_size):
words.append(fluid.data(
name='word_{0}'.format(i), shape=[-1, 1], dtype='int64'))
dict_size = 10000
label_word = int(window_size / 2) + 1
embs = []
for i in xrange(window_size):
for i in range(window_size):
if i == label_word:
continue
......@@ -64,7 +64,7 @@ nce
num_total_classes=dict_size, param_attr='nce.w_0',
bias_attr='nce.b_0')
# 或使用自定义分布
#or use custom distribution
dist = np.array([0.05,0.5,0.1,0.3,0.05])
loss = fluid.layers.nce(input=embs, label=words[label_word],
num_total_classes=5, param_attr='nce.w_1',
......
......@@ -3,7 +3,7 @@
not_equal
-------------------------------
.. py:function:: paddle.fluid.layers.not_equal(x, y, cond=None)
.. py:function:: paddle.fluid.layers.not_equal(x, y, cond=None, name=None)
:alias_main: paddle.not_equal
:alias: paddle.not_equal,paddle.tensor.not_equal,paddle.tensor.logic.not_equal
......@@ -17,7 +17,7 @@ not_equal
- **x** (Variable) – 进行比较的第一个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **y** (Variable) – 进行比较的第二个输入,是一个多维的Tensor,数据类型可以是float32,float64,int32,int64。
- **cond** (Variable,可选) – 如果为None,则创建一个Tensor来作为进行比较的输出结果,该Tensor的shape和数据类型和输入x一致;如果不为None,则将Tensor作为该OP的输出,数据类型和数据shape需要和输入x一致。默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,数据的shape和输入x一致。
返回类型:变量(Variable),数据类型为bool类型。
......
......@@ -5,21 +5,18 @@ ones
.. py:function:: paddle.fluid.layers.ones(shape,dtype,force_cpu=False)
**ones**
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为1的Tensor,该OP会将stop_gradient设置为True,即停止梯度更新。
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为1的Tensor。
参数:
- **shape** (tuple|list) - 输出Tensor的形状
- **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为float16、float32、float64、int32或int64。
- **force_cpu** (bool) – 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。
- **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64
- **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为bool、 float16、float32、float64、int32或int64。
- **force_cpu** (bool, 可选) – 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。
返回:值全为1的Tensor,数据类型和 ``dtype`` 定义的类型一致。
返回类型:Variable
抛出异常:
- ``TypeError`` - 当 ``dtype`` 不是bool、 float16、float32、float64、int32、int64和None时。
- ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor时, 当 ``shape`` 为Tensor,其数据类型不是int32或者int64时。
**代码示例**:
......
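下面是一个最小用法示意(结果注释按全1语义给出):

.. code-block:: python

import paddle.fluid as fluid

data = fluid.layers.ones(shape=[2, 4], dtype='float32')
# [[1., 1., 1., 1.],
#  [1., 1., 1., 1.]]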
......@@ -3,7 +3,7 @@
pow
-------------------------------
.. py:function:: paddle.fluid.layers.pow(x, factor=1.0, name=None)
.. py:function:: paddle.pow(x, exponent, name=None)
......@@ -12,16 +12,16 @@ pow
.. math::
out = x^{factor}
out = x^{exponent}
**注意:如果需要对输入进行 elementwise_pow 操作,请使用** :ref:`cn_api_fluid_layers_elementwise_pow` 。
参数:
- **x** (Variable)- 多维 ``Tensor`` 或 ``LoDTensor`` ,数据类型为 ``float32`` 或 ``float64`` 。
- **factor** (float32|Variable,可选)- ``float32`` 或形状为[1]的 ``Tensor`` 或 ``LoDTensor``,数据类型为 ``float32``。Pow OP的指数因子。默认值:1.0
- **x** (Variable)- 多维 ``Variable``,数据类型为 ``float32`` 或 ``float64`` 。
- **exponent** (float32|Variable)- Pow OP 的指数,可以是 ``float32`` 标量,或形状为[1]、数据类型为 ``float32`` 的 ``Variable`` 。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置。默认值: ``None``。
返回:维度与输入 `x` 相同的 ``Tensor`` 或 ``LoDTensor``,数据类型与 ``x`` 相同。
返回:维度与输入 `x` 相同的 ``Variable``,数据类型与 ``x`` 相同。
返回类型:Variable。
......@@ -30,18 +30,23 @@ pow
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
x = fluid.data(name="x", shape=[32,32], dtype="float32")
paddle.enable_imperative()
x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32")
# example 1: exponent is a float
x_data = np.array([1, 2, 3])
exponent = 2
x = paddle.imperative.to_variable(x_data)
res = paddle.pow(x, exponent)
print(res.numpy()) # [1 4 9]
# example 1: argument factor is float
y_1 = fluid.layers.pow(x, factor=2.0)
# y_1 is x^{2.0}
# example 2: exponent is a Variable
exponent = paddle.fill_constant(shape=[1], value=2, dtype='float32')
res = paddle.pow(x, exponent)
print(res.numpy()) # [1 4 9]
# example 2: argument factor is Variable
factor_tensor = fluid.layers.fill_constant([1], "float32", 3.0)
y_2 = fluid.layers.pow(x, factor=factor_tensor)
# y_2 is x^{3.0}
......
......@@ -34,10 +34,18 @@ PRROIPool运算
.. code-block:: python
## prroi_pool without batch_roi_num
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[490, 28, 28], dtype='float32')
rois = fluid.layers.data(name='rois', shape=[4], lod_level=1, dtype='float32')
pool_out = fluid.layers.prroi_pool(x, rois, 10, 1.0, 7, 7)
x = fluid.data(name='x', shape=[None, 490, 28, 28], dtype='float32')
rois = fluid.data(name='rois', shape=[None, 4], lod_level=1, dtype='float32')
pool_out = fluid.layers.prroi_pool(x, rois, 1.0, 7, 7)
## prroi_pool with batch_roi_num
batchsize=4
x2 = fluid.data(name='x2', shape=[batchsize, 490, 28, 28], dtype='float32')
rois2 = fluid.data(name='rois2', shape=[batchsize, 4], dtype='float32')
batch_rois_num = fluid.data(name='rois_nums', shape=[batchsize], dtype='int64')
pool_out2 = fluid.layers.prroi_pool(x2, rois2, 1.0, 7, 7, batch_roi_nums=batch_rois_num)
......
......@@ -116,7 +116,6 @@ py_reader
dtypes=['float32', 'int64'],
name='test_reader')
test_reader.decorate_paddle_reader(paddle.batch(mnist.test(), 512))
test_loss = network(test_reader)
fluid.Executor(fluid.CUDAPlace(0)).run(train_startup_prog)
......
......@@ -3,33 +3,32 @@
range
-------------------------------
.. py:function:: paddle.fluid.layers.range(start, end, step, dtype)
.. py:function:: paddle.fluid.layers.range(start, end, step, dtype, name=None)
注意:推荐使用 paddle.arange
该OP返回以步长 ``step`` 均匀分隔给定数值区间[``start``, ``end``)的1-D Tensor,数据类型为 ``dtype``。
该API根据step均匀分隔给定数值区间[start, end),并返回该分隔结果。
当 ``dtype`` 表示浮点类型时,为了避免浮点计算误差,建议给 ``end`` 加上一个极小值epsilon,使边界可以更加明确。
参数:
- **start** (float32 | float64 | int32 | int64 | Variable) - 区间起点,且区间包括此值, 当类型是Variable时,是shape为 `[1]` 的1-D Tensor。
- **end** (float32 | float64 | int32 | int64 | Variable) - 区间终点,通常区间不包括此值。但当step不是整数,且浮点数取整会影响输出的长度时例外。
- **step** (float32 | float64 | int32 | int64 | Variable) - 均匀分割的步长。
- **dtype** (str | core.VarDesc.VarType) - 输出Tensor的数据类型,可为 `'float32'`, `'float64'`, `'int32'`, `'int64'` 。
返回:均匀分割给定数值区间后得到的1-D Tensor, 数据类型为输入 `dtype` 。
- **start** (float|int|Tensor) - 区间起点(且区间包括此值)。当 ``start`` 类型是Tensor时,是形状为[1]且数据类型为int32、int64、float32、float64的Tensor。
- **end** (float|int|Tensor) - 区间终点(且通常区间不包括此值)。当 ``end`` 类型是Tensor时,是形状为[1]且数据类型为int32、int64、float32、float64的Tensor。
- **step** (float|int|Tensor) - 均匀分割的步长。当 ``step`` 类型是Tensor时,是形状为[1]且数据类型为int32、int64、float32、float64的Tensor。
- **dtype** (str|np.dtype|core.VarDesc.VarType) - 输出Tensor的数据类型,支持int32、int64、float32、float64。
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回类型:Variable
返回:
Tensor: 以步长 ``step`` 均匀分割给定数值区间[``start``, ``end``)后得到的1-D Tensor, 数据类型为 ``dtype`` 。
抛出异常:
- ``TypeError`` - 如果 ``dtype`` 不是int32、int64、float32、float64。
**代码示例**
代码示例
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.range(0, 10, 2, 'int32')
# [0, 2, 4, 6, 8]
......@@ -29,17 +29,14 @@ reciprocal 对输入Tensor取倒数
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.fill_constant(shape=[2], value=4, dtype='float32') #data=[4.0, 4.0]
result = fluid.layers.reciprocal(data) # result=[0.25, 0.25]
import paddle
import numpy as np
paddle.enable_imperative()
x_data = np.array([1, 2, 3, 4]).astype(np.float32)
x = paddle.imperative.to_variable(x_data)
res = paddle.reciprocal(x)
print(res.numpy())
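# [1., 0.5, 0.33333334, 0.25]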
......
......@@ -23,13 +23,8 @@ reorder_lod_tensor_by_rank
注意:该OP对 ``X`` 进行的排序所依据的 ``LoDRankTable`` 不一定是在 ``X`` 的基础上得出来的。它可以由其他不同的序列得出,并由该OP依据这个 ``LoDRankTable`` 来对 ``X`` 排序。
参数:
- **x** (Variable) - 待根据提供的 ``rank_table`` 进行排序的LoDTensor
- **rank_table** (Variable) - 提供对 ``x`` 重新排列的 ``LoDRankTable`` 类型的顺序信息,构造方法举例如下:
.. code-block:: python
rank_data = fluid.layers.data(name=data_desc[1][0], shape=data_desc[1][1])
rank_table = fluid.layers.control_flow.lod_rank_table(rank_data)
- **x** (Variable) - 待根据提供的 ``rank_table`` 进行排序的LoDTensor.
- **rank_table** (Variable) - 提供对 ``x`` 重新排列的 ``LoDRankTable`` 类型的顺序信息.
返回: 重新排列后的LoDTensor
......@@ -40,15 +35,33 @@ reorder_lod_tensor_by_rank
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
data_desc = (['input', [9], 0], ['ref', [5], 1])
data = fluid.layers.data(name=data_desc[0][0], shape=data_desc[0][1])
rank_data = fluid.layers.data(name=data_desc[1][0], shape=data_desc[1][1])
table = fluid.layers.control_flow.lod_rank_table(rank_data)
rank_data = fluid.layers.data(name='rank_data', shape=[5], dtype='float32', lod_level=2)
table = fluid.layers.control_flow.lod_rank_table(rank_data, level=1)
data = fluid.layers.data(name='data', shape=[9], lod_level=2)
new_data = fluid.layers.reorder_lod_tensor_by_rank(
x=data, rank_table=table)
place=fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
rank_tensor = fluid.create_lod_tensor(np.random.random([14,5]).astype("float32"), [[4,1], [3, 2, 2, 3, 4]], place)
data_ndarray = np.random.random([27, 9]).astype("float32")
data_lod = [[1, 2, 2, 4, 4], [2, 2, 4, 2, 2, 2, 1, 1, 2, 2, 4, 2, 1]]
data_tensor = fluid.create_lod_tensor(data_ndarray, data_lod, place)
out = exe.run(fluid.default_main_program(),feed={'data':data_tensor, 'rank_data':rank_tensor}, fetch_list=[new_data], return_numpy=False)
print(out[0])
# lod: {{0, 4, 5, 9, 11, 13}{0, 2, 6, 8, 9, 11, 13, 14, 15, 17, 19, 23, 25, 27}}
#shape: [27, 9]
......@@ -68,9 +68,9 @@ retinanet_target_assign
gt_boxes = fluid.data(name='gt_boxes', shape=[10, 4],
dtype='float32')
gt_labels = fluid.data(name='gt_labels', shape=[10, 1],
dtype='float32')
dtype='int32')
is_crowd = fluid.data(name='is_crowd', shape=[1],
dtype='float32')
dtype='int32')
im_info = fluid.data(name='im_info', shape=[1, 3],
dtype='float32')
score_pred, loc_pred, score_target, loc_target, bbox_inside_weight, fg_num = \
......
......@@ -15,10 +15,31 @@ reverse
该OP对输入Tensor ``x`` 在指定轴 ``axis`` 上进行数据的逆序操作。
参数:
- **x** (Variable) - 多维Tensor,类型必须为int32,int64,float32,float64。
- **axis** (int|tuple|list) - 指定逆序运算的轴,取值范围是[-R, R),R是输入 ``x`` 的Rank, ``axis`` 为负时与 ``axis`` +R 等价。如果 ``axis`` 是一个元组或列表,则在``axis`` 每个元素值所指定的轴上进行逆序运算。
::
示例1:
输入是 LoDTensor 类型:
x = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
axis = [0, 1]
输出:
output = [[8, 7, 6], [5, 4, 3], [2, 1, 0]]
示例2:
输入是 LoDTensorArray 类型:
x = {[[0, 1], [2, 3]],
[[4, 5, 6]],
[[7], [8], [9]]}
axis = 0
输出:
output = {[[7], [8], [9]],
[[4, 5, 6]],
[[0, 1], [2, 3]]}
参数:
- **x** (Variable) - 输入为Tensor或LoDTensorArray,数据类型支持bool,int8,int32,int64,float32和float64。若输入是LoDTensorArray类型,则返回一个逆序的LoDTensorArray,其内部Tensor元素的次序保持不变。
- **axis** (int|tuple|list) - 指定逆序运算的轴,取值范围是[-R, R),R是输入 ``x`` 的Rank, ``axis`` 为负时与 ``axis`` +R 等价。如果 ``axis`` 是一个元组或列表,则在 ``axis`` 每个元素值所指定的轴上进行逆序运算。如果输入是LoDTensorArray类型,axis须是值为0的int,或shape为[1]的list ``[0]`` 、元组 ``(0,)`` 。
返回:逆序后的Tensor,形状、数据类型和 ``x`` 一致。
返回类型:Variable
......@@ -32,3 +53,13 @@ reverse
data = fluid.layers.assign(np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]], dtype='float32')) # [[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]
result1 = fluid.layers.reverse(data, 0) # [[6., 7., 8.], [3., 4., 5.], [0., 1., 2.]]
result2 = fluid.layers.reverse(data, [0, 1]) # [[8., 7., 6.], [5., 4., 3.], [2., 1., 0.]]
# 输入为LoDTensorArray时
data1 = fluid.layers.assign(np.array([[0, 1, 2]], dtype='float32'))
data2 = fluid.layers.assign(np.array([[3, 4, 5]], dtype='float32'))
tensor_array = fluid.layers.create_array(dtype='float32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
fluid.layers.array_write(data1, i, tensor_array)
fluid.layers.array_write(data2, i+1, tensor_array)
reversed_tensor_array = fluid.layers.reverse(tensor_array, 0) # {[[3, 4, 5]], [[0, 1, 2]]}
......@@ -62,7 +62,7 @@ scale
import numpy as np
inputs = fluid.layers.data(name="x", shape=[2, 3], dtype='float32')
scale = fluid.layers.data(name="scale", shape=[1], dtype='float32'
scale = fluid.layers.data(name="scale", shape=[1], dtype='float32',
append_batch_size=False)
output = fluid.layers.scale(inputs, scale = scale, bias = 1.0)
......
......@@ -13,12 +13,30 @@ shape
shape层。
获得输入Tensor的shape。
获得输入Tensor或SelectedRows的shape。
::
示例1:
输入是 N-D Tensor类型:
input = [ [1, 2, 3, 4], [5, 6, 7, 8] ]
输出shape:
input.shape = [2, 4]
示例2:
输入是 SelectedRows类型:
input.rows = [0, 4, 19]
input.height = 20
input.value = [ [1, 2], [3, 4], [5, 6] ] # inner tensor
输出shape:
input.shape = [3, 2]
参数:
- **input** (Variable)- 输入的多维Tensor,数据类型为float32,float64,int32,int64。
- **input** (Variable)- 输入的多维Tensor或SelectedRows,数据类型为float16,float32,float64,int32,int64。如果输入是SelectedRows类型,则返回其内部持有Tensor的shape。
返回: 一个Tensor,表示输入Tensor的shape。
返回: 一个Tensor,表示输入Tensor或SelectedRows的shape。
返回类型: Variable(Tensor)。
......@@ -29,7 +47,7 @@ shape层。
import paddle.fluid as fluid
import numpy as np
inputs = fluid.layers.data(name="x", shape=[3, 100, 100], dtype="float32")
inputs = fluid.data(name="x", shape=[3, 100, 100], dtype="float32")
output = fluid.layers.shape(inputs)
exe = fluid.Executor(fluid.CPUPlace())
......
.. _cn_api_fluid_layers_shuffle:
shuffle
-------------------------------
.. py:function:: paddle.fluid.layers.shuffle(reader, buffer_size)
创建一个特殊的数据读取器,它的输出数据会被重洗(shuffle)。由原始读取器创建的迭代器得到的输出将会被暂存到shuffle缓存区,其后
会对其进行重洗运算。shuffle缓存区的大小由参数 ``buffer_size`` 决定。
参数:
- **reader** (callable) – 输出会被shuffle的原始reader
- **buffer_size** (int) – 进行shuffle的buffer的大小
返回:其输出会被shuffle的一个reader(读取器)
返回类型:callable
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
raw_reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
'./data2.recordio'],
shapes=[(3,224,224), (1,)],
lod_levels=[0, 0],
dtypes=['float32', 'int64'],
thread_num=2,
buffer_size=2)
batch_reader = fluid.layers.batch(reader=raw_reader, batch_size=5)
shuffle_reader = fluid.layers.shuffle(reader=batch_reader, buffer_size=5000)
......@@ -47,13 +47,70 @@ Focal Loss的计算过程如下:
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
input = fluid.data(name='data', shape=[10,80], dtype='float32')
label = fluid.data(name='label', shape=[10,1], dtype='int32')
fg_num = fluid.data(name='fg_num', shape=[1], dtype='int32')
loss = fluid.layers.sigmoid_focal_loss(x=input,
label=label,
fg_num=fg_num,
gamma=2.0,
alpha=0.25)
num_classes = 10 # exclude background
image_width = 16
image_height = 16
batch_size = 32
max_iter = 20
def gen_train_data():
x_data = np.random.uniform(0, 255, (batch_size, 3, image_height,
image_width)).astype('float64')
label_data = np.random.randint(0, num_classes,
(batch_size, 1)).astype('int32')
return {"x": x_data, "label": label_data}
def get_focal_loss(pred, label, fg_num, num_classes):
pred = fluid.layers.reshape(pred, [-1, num_classes])
label = fluid.layers.reshape(label, [-1, 1])
label.stop_gradient = True
loss = fluid.layers.sigmoid_focal_loss(
pred, label, fg_num, gamma=2.0, alpha=0.25)
loss = fluid.layers.reduce_sum(loss)
return loss
def build_model(mode='train'):
x = fluid.data(name="x", shape=[-1, 3, -1, -1], dtype='float64')
output = fluid.layers.pool2d(input=x, pool_type='avg', global_pooling=True)
output = fluid.layers.fc(
input=output,
size=num_classes,
# Notice: size is set to be the number of target classes (excluding background)
# because sigmoid activation will be done in the sigmoid_focal_loss op.
act=None)
if mode == 'train':
label = fluid.data(name="label", shape=[-1, 1], dtype='int32')
# Obtain the fg_num needed by the sigmoid_focal_loss op:
# 0 in label represents background, >=1 in label represents foreground,
# find the elements in label which are greater or equal than 1, then
# computed the numbers of these elements.
data = fluid.layers.fill_constant(shape=[1], value=1, dtype='int32')
fg_label = fluid.layers.greater_equal(label, data)
fg_label = fluid.layers.cast(fg_label, dtype='int32')
fg_num = fluid.layers.reduce_sum(fg_label)
fg_num.stop_gradient = True
avg_loss = get_focal_loss(output, label, fg_num, num_classes)
return avg_loss
else:
# During evaluating or testing phase,
# output of the final fc layer should be connected to a sigmoid layer.
pred = fluid.layers.sigmoid(output)
return pred
loss = build_model('train')
moment_optimizer = fluid.optimizer.MomentumOptimizer(
learning_rate=0.001, momentum=0.9)
moment_optimizer.minimize(loss)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
for i in range(max_iter):
outs = exe.run(feed=gen_train_data(), fetch_list=[loss.name])
print(outs)
......@@ -5,12 +5,6 @@ softmax
.. py:function:: paddle.fluid.layers.softmax(input, use_cudnn=False, name=None, axis=-1)
:alias_main: paddle.nn.functional.softmax
:alias: paddle.nn.functional.softmax,paddle.nn.functional.activation.softmax
:old_api: paddle.fluid.layers.softmax
该OP实现了softmax层。OP的计算过程如下:
步骤1:输入 ``input`` 的 ``axis`` 维会被置换到最后一维;
......
......@@ -3,7 +3,7 @@
split
-------------------------------
.. py:function:: paddle.fluid.layers.split(input,num_or_sections,dim=-1,name=None)
.. py:function:: paddle.fluid.layers.split(input, num_or_sections, dim=-1, name=None)
......@@ -11,18 +11,18 @@ split
该OP将输入Tensor分割成多个子Tensor。
参数:
- **input** (Variable) - 输入变量,数据类型为float32,float64,int32,int64的多维Tensor或者LoDTensor。
- **input** (Tensor) - 输入变量,数据类型为bool, float16,float32,float64,int32,int64的多维Tensor。
- **num_or_sections** (int|list|tuple) - 如果 ``num_or_sections`` 是一个整数,则表示Tensor平均划分为相同大小子Tensor的数量。如果 ``num_or_sections`` 是一个list或tuple,那么它的长度代表子Tensor的数量,它的元素可以是整数或者形状为[1]的Tensor,依次代表子Tensor需要分割成的维度的大小。list或tuple的长度不能超过输入Tensor待分割的维度的大小。至多有一个元素值为-1,-1表示该值是由 ``input`` 待分割的维度值和 ``num_or_sections`` 的剩余元素推断出来的。
- **dim** (int|Variable,可选) - 整数或者形状为[1]的Tensor,数据类型为int32或int64。表示需要分割的维度。如果dim < 0,则划分的维度为rank(input) + dim。默认值为-1。
- **dim** (int|Tensor,可选) - 整数或者形状为[1]的Tensor,数据类型为int32或int64。表示需要分割的维度。如果 ``dim < 0`` ,则划分的维度为 ``rank(input) + dim`` 。默认值为-1。
- **name** (str,可选) - 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:分割后的Tensor列表。
返回类型:列表(Variable(Tensor|LoDTensor)),数据类型为int32,int64,float32,float64。
抛出异常:
- :code:`TypeError`:``num_or_sections`` 不是int、list 或 tuple。
- :code:`TypeError`:``dim`` 不是 int 或 Variable。
- :code:`TypeError`:``input`` 的数据类型不是bool、float16、float32、float64、int32或int64时 。
- :code:`TypeError`:``num_or_sections`` 不是int、list 或 tuple时。
- :code:`TypeError`:``dim`` 不是 int 或 Tensor时。当 ``dim`` 为Tensor,其数据类型不是int32或int64时。
**代码示例**:
......@@ -30,27 +30,31 @@ split
import paddle.fluid as fluid
# 输入是维度为[3, 9, 5]的Tensor:
# input is a Tensor which shape is [3, 9, 5]
input = fluid.data(
name="input", shape=[3, 9, 5], dtype="float32")
# 传入num_or_sections为一个整数
x0, x1, x2 = fluid.layers.split(input, num_or_sections=3, dim=1)
x0.shape # [3, 3, 5]
x1.shape # [3, 3, 5]
x2.shape # [3, 3, 5]
# 传入num_or_sections为一个整数列表
x0, x1, x2 = fluid.layers.split(input, num_or_sections=[2, 3, 4], dim=1)
x0.shape # [3, 2, 5]
x1.shape # [3, 3, 5]
x2.shape # [3, 4, 5]
# 传入num_or_sections为一个整数列表,其中有一个元素为-1
x0, x1, x2 = fluid.layers.split(input, num_or_sections=[2, 3, -1], dim=1)
x0.shape # [3, 2, 5]
x1.shape # [3, 3, 5]
x2.shape # [3, 4, 5]
out0, out1, out2 = fluid.layers.split(input, num_or_sections=3, dim=1)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
out0, out1, out2 = fluid.layers.split(input, num_or_sections=[2, 3, 4], dim=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]
out0, out1, out2 = fluid.layers.split(input, num_or_sections=[2, 3, -1], dim=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]
# dim is negative, the real dim is (rank(input) + dim), whose real value is 1.
out0, out1, out2 = fluid.layers.split(input, num_or_sections=3, dim=-2)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
......
......@@ -3,12 +3,12 @@
uniform_random
-------------------------------
.. py:function:: paddle.fluid.layers.uniform_random(shape, dtype='float32', min=-1.0, max=1.0, seed=0)
.. py:function:: paddle.fluid.layers.uniform_random(shape, dtype='float32', min=-1.0, max=1.0, seed=0, name=None)
该OP使用从范围[min,max)内均匀分布采样的随机值初始化一个Tensor
该OP返回数值服从范围[``min``, ``max``)内均匀分布的随机Tensor,形状为 ``shape``,数据类型为 ``dtype``
::
......@@ -19,18 +19,19 @@ uniform_random
result=[[0.8505902, 0.8397286]]
参数:
- **shape** (list|tuple|Variable)-输出Tensor的维度,shape类型支持list,tuple,Variable。如果shape类型是list或者tuple,它的元素可以是整数或者形状为[1]的Tensor,其中整数的数据类型为int,Tensor的数据类型为int32或int64。如果shape的类型是Variable,则是1D的Tensor,Tensor的数据类型为int32或int64。
- **dtype** (np.dtype|core.VarDesc.VarType|str,可选) – 输出Tensor的数据类型,支持float32(默认), float64。
- **min** (float,可选)-要生成的随机值范围的下限,min包含在范围中。支持的数据类型:float。默认值为-1.0。
- **max** (float,可选)-要生成的随机值范围的上限,max不包含在范围中。支持的数据类型:float。默认值为1.0。
- **seed** (int,可选)-随机种子,用于生成样本。0表示使用系统生成的种子。注意如果种子不为0,该操作符每次都生成同样的随机数。支持的数据类型:int。默认为 0。
- **shape** (list|tuple|Tensor) - 生成的随机Tensor的形状。如果 ``shape`` 是list、tuple,则其中的元素可以是int,或者是形状为[1]且数据类型为int32、int64的Tensor。如果 ``shape`` 是Tensor,则是数据类型为int32、int64的1-D Tensor。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持float32、float64。默认值为float32。
- **min** (float|int,可选) - 要生成的随机值范围的下限,min包含在范围中。支持的数据类型:float、int。默认值为-1.0。
- **max** (float|int,可选) - 要生成的随机值范围的上限,max不包含在范围中。支持的数据类型:float、int。默认值为1.0。
- **seed** (int,可选) - 随机种子,用于生成样本。0表示使用系统生成的种子。注意如果种子不为0,该操作符每次都生成同样的随机数。支持的数据类型:int。默认为 0。
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回:表示一个随机初始化结果的Tensor,该Tensor的数据类型由dtype参数决定,该Tensor的维度由shape参数决定。
返回类型:Variable
返回:
Tensor:数值服从范围[``min``, ``max``)内均匀分布的随机Tensor,形状为 ``shape``,数据类型为 ``dtype``。
抛出异常:
- :code:`TypeError`: shape的类型应该是list、tuple 或 Variable。
- ``TypeError`` - 如果 ``shape`` 的类型不是list、tuple、Tensor。
- ``TypeError`` - 如果 ``dtype`` 不是float32、float64。
**代码示例**:
......@@ -43,17 +44,17 @@ uniform_random
train_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
# example 1:
# attr shape is a list which doesn't contain tensor Variable.
# attr shape is a list which doesn't contain Tensor.
result_1 = fluid.layers.uniform_random(shape=[3, 4])
# example 2:
# attr shape is a list which contains tensor Variable.
# attr shape is a list which contains Tensor.
dim_1 = fluid.layers.fill_constant([1],"int64",3)
dim_2 = fluid.layers.fill_constant([1],"int32",5)
result_2 = fluid.layers.uniform_random(shape=[dim_1, dim_2])
# example 3:
# attr shape is a Variable, the data type must be int32 or int64
# attr shape is a Tensor, the data type must be int32 or int64
var_shape = fluid.data(name='var_shape', shape=[2], dtype="int64")
result_3 = fluid.layers.uniform_random(var_shape)
var_shape_int32 = fluid.data(name='var_shape_int32', shape=[2], dtype="int32")
......
......@@ -27,7 +27,7 @@ unique为 ``x`` 返回一个unique张量和一个指向该unique张量的索引
import numpy as np
import paddle.fluid as fluid
x = fluid.assign(np.array([2, 3, 3, 1, 5, 3], dtype='int32'))
x = fluid.layers.assign(np.array([2, 3, 3, 1, 5, 3], dtype='int32'))
out, index = fluid.layers.unique(x) # out is [2, 3, 1, 5]; index is [0, 1, 1, 2, 3, 1]
......
......@@ -14,7 +14,7 @@ unstack
该OP将单个dim为 ``D`` 的Tensor沿 ``axis`` 轴unpack为 ``num`` 个dim为 ``(D-1)`` 的Tensor
参数:
- **x** (Variable) – 输入x为 ``dim > 0`` 的Tensor,
- **x** (Tensor) – 输入x为 ``dim > 0`` 的Tensor,
支持的数据类型: float32,float64,int32,int64。
- **axis** (int,可选) – 输入Tensor进行unpack运算所在的轴,axis的范围为:``[-D, D)`` ,
......@@ -24,7 +24,7 @@ unstack
返回: 长度为num的Tensor列表, 数据类型与输入Tensor相同,dim为 ``(D-1)``。
返回类型: list(Variable)
返回类型: list(Tensor)
抛出异常:
- :code:`ValueError`:``x.shape[axis]`` <= 0 或 ``axis`` 不在[-D, D)范围内
......@@ -34,7 +34,7 @@ unstack
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[2, 3, 5], dtype='float32') #创建一个shape=[2, 3, 5]的Tensor
x = fluid.data(name='x', shape=[2, 3, 5], dtype='float32') #创建一个shape=[2, 3, 5]的Tensor
y = fluid.layers.unstack(x, axis=1) #沿着第1轴进行unpack, unpack后为3个shape=[2,5]的Tensor
......
......@@ -5,21 +5,18 @@ zeros
.. py:function:: paddle.fluid.layers.zeros(shape,dtype,force_cpu=False)
**zeros**
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为0的Tensor,该OP会将stop_gradient设置为True,即停止梯度更新。
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为0的Tensor。
参数:
- **shape** (tuple|list) - 输出Tensor的形状
- **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为float16、float32、float64、int32或int64。
- **force_cpu** (bool) - 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。
- **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64
- **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为bool、 float16、float32、float64、int32或int64。
- **force_cpu** (bool, 可选) - 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。
返回:值全为0的Tensor,数据类型和 ``dtype`` 定义的类型一致。
返回类型:Variable
抛出异常:
- ``TypeError`` - 当 ``dtype`` 不是bool、 float16、float32、float64、int32、int64。
- ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor时。 当 ``shape`` 为Tensor,其数据类型不是int32或者int64时。
**代码示例**:
......
......@@ -18,6 +18,7 @@ paddle.nn
nn_cn/Upsample_cn.rst
nn_cn/activation_cn.rst
nn_cn/loss_cn.rst
nn_cn/functional_cn.rst
nn_cn/adaptive_pool2d_cn.rst
nn_cn/adaptive_pool3d_cn.rst
nn_cn/add_position_encoding_cn.rst
......@@ -94,7 +95,7 @@ paddle.nn
nn_cn/logsigmoid_cn.rst
nn_cn/log_loss_cn.rst
nn_cn/lrn_cn.rst
nn_cn/margin_rank_loss_cn.rst
nn_cn/margin_ranking_loss_cn.rst
nn_cn/maxout_cn.rst
nn_cn/mse_loss_cn.rst
nn_cn/multiclass_nms_cn.rst
......@@ -105,6 +106,7 @@ paddle.nn
nn_cn/pad2d_cn.rst
nn_cn/pad_cn.rst
nn_cn/pad_constant_like_cn.rst
nn_cn/PairwiseDistance_cn.rst
nn_cn/ParameterList_cn.rst
nn_cn/piecewise_decay_cn.rst
nn_cn/pixel_shuffle_cn.rst
......@@ -160,3 +162,6 @@ paddle.nn
nn_cn/while_loop_cn.rst
nn_cn/yolov3_loss_cn.rst
nn_cn/yolo_box_cn.rst
nn_cn/loss_cn/MarginRankingLoss_cn.rst
nn_cn/functional_cn/margin_ranking_loss_cn.rst
.. _cn_api_nn_PairwiseDistance:
PairwiseDistance
-------------------------------
.. py:class:: paddle.nn.PairwiseDistance(p=2., epsilon=1e-6, keepdim=False, name=None)
该OP计算两个向量(输入 ``x``、``y`` )之间pairwise的距离。该距离通过p范数计算:
.. math::
\Vert x \Vert _p = \left( \sum_{i=1}^n \vert x_i \vert ^ p \right ) ^ {1/p}.
参数
::::::::
- **p** (float,可选)- 指定p阶的范数。默认值为2。
- **epsilon** (float,可选)- 添加到分母的一个很小值,避免发生除零错误。默认值为1e-6。
- **keepdim** (bool,可选)- 是否保留输出张量减少的维度。输出结果相对于 ``|x-y|`` 的结果减少一维,除非 :attr:`keepdim` 为True,默认值为False。
- **name** (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name` 。
形状
::::::::
- **x** (Tensor) - :math:`(N, D)` ,其中D是向量的维度,数据类型为float32或float64。
- **y** (Tensor) - :math:`(N, D)` ,与 ``x`` 的形状、数据类型相同。
- **output** (Tensor) - :math:`(N)` ,如果 :attr:`keepdim` 为True,则形状为 :math:`(N, 1)` 。数据类型与 ``x``、 ``y`` 相同。
代码示例
::::::::
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x_np = np.array([[1., 3.], [3., 5.]]).astype(np.float64)
y_np = np.array([[5., 6.], [7., 8.]]).astype(np.float64)
x = paddle.to_variable(x_np)
y = paddle.to_variable(y_np)
dist = paddle.nn.PairwiseDistance()
distance = dist(x, y)
print(distance.numpy()) # [5. 5.]
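作为补充,下面给出一个假设性的 NumPy 验算示意(非官方示例),说明在 p=2 且忽略 ``epsilon`` 的情况下,上述结果等价于逐行计算 ``x - y`` 的 2 范数:

.. code-block:: python

    # 假设性的验算示意:用 NumPy 逐行计算 ||x - y||_2
    import numpy as np

    x_np = np.array([[1., 3.], [3., 5.]])
    y_np = np.array([[5., 6.], [7., 8.]])
    manual = np.linalg.norm(x_np - y_np, ord=2, axis=1)
    print(manual)  # [5. 5.],与 PairwiseDistance 的输出一致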
......@@ -8,4 +8,5 @@ activation
.. toctree::
:maxdepth: 1
activation_cn/LeakyReLU_cn.rst
activation_cn/Sigmoid_cn.rst
.. _cn_api_nn_LeakyReLU:
LeakyReLU
-------------------------------
.. py:class:: paddle.nn.LeakyReLU(alpha=0.01, name=None)
LeakyReLU 激活层
.. math::
\\Out = max(x, alpha*x)\\
其中,:math:`x` 为输入的 Tensor
参数
::::::::::
- alpha (float,可选) - :math:`x < 0` 时的斜率。默认值为0.01。
- name (str, 可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
形状:
- input: 任意形状的Tensor。
- output: 和input具有相同形状的Tensor。
代码示例
:::::::::
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
lrelu = paddle.nn.LeakyReLU()
x = paddle.imperative.to_variable(np.array([-2, 0, 1], 'float32'))
out = lrelu(x) # [-0.02, 0, 1]
=======================
functional
=======================
.. toctree::
:maxdepth: 1
functional_cn/l1_loss_cn.rst
functional_cn/nll_loss_cn.rst
functional_cn/margin_ranking_loss_cn.rst
l1_loss
-------------------------------
.. py:function:: paddle.nn.functional.l1_loss(x, label, reduction='mean', name=None)
该接口计算输入 ``x`` 和标签 ``label`` 间的 `L1 loss` 损失。
该损失函数的数学计算公式如下:
当 `reduction` 设置为 ``'none'`` 时,
.. math::
Out = \lvert x - label\rvert
当 `reduction` 设置为 ``'mean'`` 时,
.. math::
Out = MEAN(\lvert x - label\rvert)
当 `reduction` 设置为 ``'sum'`` 时,
.. math::
Out = SUM(\lvert x - label\rvert)
参数
:::::::::
- **x** (Tensor): - 输入的Tensor,维度是[N, *], 其中N是batch size, `*` 是任意数量的额外维度。数据类型为:float32、float64、int32、int64。
- **label** (Tensor): - 标签,维度是[N, *], 与 ``x`` 相同。数据类型为:float32、float64、int32、int64。
- **reduction** (str, 可选): - 指定应用于输出结果的计算方式,可选值有: ``'none'``, ``'mean'``, ``'sum'`` 。默认为 ``'mean'``,计算 `L1Loss` 的均值;设置为 ``'sum'`` 时,计算 `L1Loss` 的总和;设置为 ``'none'`` 时,则返回 `L1Loss`。
- **name** (str,可选): - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
返回
:::::::::
``Tensor``, 输入 ``x`` 和标签 ``label`` 间的 `L1 loss` 损失。如果 :attr:`reduction` 是 ``'none'``, 则输出Loss的维度为 [N, *], 与输入 ``x`` 相同。如果 :attr:`reduction` 是 ``'mean'`` 或 ``'sum'``, 则输出Loss的维度为 [1]。
代码示例
:::::::::
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x_data = np.array([[1.5, 0.8], [0.2, 1.3]]).astype("float32")
label_data = np.array([[1.7, 1], [0.4, 0.5]]).astype("float32")
x = paddle.to_variable(x_data)
label = paddle.to_variable(label_data)
l1_loss = paddle.nn.functional.l1_loss(x, label)
print(l1_loss.numpy())
# [0.35]
l1_loss = paddle.nn.functional.l1_loss(x, label, reduction='none')
print(l1_loss.numpy())
# [[0.20000005 0.19999999]
# [0.2 0.79999995]]
l1_loss = paddle.nn.functional.l1_loss(x, label, reduction='sum')
print(l1_loss.numpy())
# [1.4]
.. _cn_api_nn_cn_margin_ranking_loss:
margin_ranking_loss
-------------------------------
.. py:function:: paddle.nn.functional.margin_ranking_loss(input, other, label, margin=0.0, reduction='mean', name=None)
该算子计算输入input,other 和 标签label间的 `margin rank loss` 损失。该损失函数的数学计算公式如下:
.. math::
margin\_rank\_loss = max(0, -label * (input - other) + margin)
当 `reduction` 设置为 ``'mean'`` 时,
.. math::
Out = MEAN(margin\_rank\_loss)
当 `reduction` 设置为 ``'sum'`` 时,
.. math::
Out = SUM(margin\_rank\_loss)
当 `reduction` 设置为 ``'none'`` 时,直接返回最原始的 `margin_rank_loss` 。
参数
::::::::
- **input** (Tensor):第一个输入的 `Tensor` ,数据类型为:float32、float64。
- **other** (Tensor):第二个输入的 `Tensor` ,数据类型为:float32、float64。
- **label** (Tensor):训练数据的标签,数据类型为:float32, float64。
- **margin** (float,可选): - 用于加和的margin值,默认值为0。
- **reduction** (string,可选): - 指定应用于输出结果的计算方式,可选值有: ``'none'`` 、 ``'mean'`` 、 ``'sum'`` 。如果设置为 ``'none'`` ,则直接返回最原始的 ``margin_rank_loss`` 。如果设置为 ``'sum'`` ,则返回 ``margin_rank_loss`` 的总和。如果设置为 ``'mean'`` ,则返回 ``margin_rank_loss`` 的平均值。默认值为 ``'mean'`` 。
- **name** (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
返回
::::::::
Tensor, 如果 :attr:`reduction` 为 ``'sum'`` 或者是 ``'mean'`` ,则形状为 :math:`[1]` ,否则shape和输入 `input` 保持一致 。数据类型与 ``input``、 ``other`` 相同。
代码示例
::::::::
.. code-block:: python
import numpy as np
import paddle
paddle.disable_static()
input = paddle.to_variable(np.array([[1, 2], [3, 4]]).astype('float32'))
other = paddle.to_variable(np.array([[2, 1], [2, 4]]).astype('float32'))
label = paddle.to_variable(np.array([[1, -1], [-1, -1]]).astype('float32'))
loss = paddle.nn.functional.margin_ranking_loss(input, other, label)
print(loss.numpy()) # [0.75]
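为便于核对上面的公式与输出 [0.75],补充一个假设性的 NumPy 验算示意(非官方示例):

.. code-block:: python

    # 假设性的验算示意:按 max(0, -label * (input - other) + margin) 手工复现
    import numpy as np

    input_np = np.array([[1, 2], [3, 4]], dtype='float32')
    other_np = np.array([[2, 1], [2, 4]], dtype='float32')
    label_np = np.array([[1, -1], [-1, -1]], dtype='float32')
    loss_elem = np.maximum(0.0, -label_np * (input_np - other_np) + 0.0)
    # 逐元素结果为 [[1., 1.], [1., 0.]],默认 reduction='mean' 时取均值
    print(loss_elem.mean())  # 0.75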
.. _cn_api_nn_functional_nll_loss:
nll_loss
-------------------------------
.. py:function:: paddle.nn.functional.nll_loss(input, label, weight=None, ignore_index=-100, reduction='mean', name=None)
该接口返回 `negative log likelihood` 。可在 :ref:`cn_api_nn_loss_NLLLoss` 查看详情。
参数
:::::::::
- **input** (Tensor): - 输入 `Tensor`, 其形状为 :math:`[N, C]` , 其中 `C` 为类别数。但是对于多维度的情形下,它的形状为 :math:`[N, C, d_1, d_2, ..., d_K]` 。数据类型为float32或float64。
- **label** (Tensor): - 输入x对应的标签值。其形状为 :math:`[N,]` 或者 :math:`[N, d_1, d_2, ..., d_K]`, 数据类型为int64。
- **weight** (Tensor, 可选): - 手动指定每个类别的权重。其默认为 `None` 。如果提供该参数的话,长度必须为 `num_classes` 。数据类型为float32或float64。
- **ignore_index** (int64, 可选): - 指定一个忽略的标签值,此标签值不参与计算。默认值为-100。数据类型为int64。
- **reduction** (str, 可选): - 指定应用于输出结果的计算方式,可选值有: `none`, `mean`, `sum` 。默认为 `mean` ,计算 `mini-batch` loss均值。设置为 `sum` 时,计算 `mini-batch` loss的总和。设置为 `none` 时,则返回loss Tensor。数据类型为string。
- **name** (str, 可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name` 。
返回
:::::::::
`Tensor` ,返回存储表示 `negative log likelihood loss` 的损失值。
代码示例
:::::::::
.. code-block:: python
import paddle
import numpy as np
from paddle.nn.functional import nll_loss
log_softmax = paddle.nn.LogSoftmax(axis=1)
input_np = np.array([[0.88103855, 0.9908683 , 0.6226845 ],
[0.53331435, 0.07999352, 0.8549948 ],
[0.25879037, 0.39530203, 0.698465 ],
[0.73427284, 0.63575995, 0.18827209],
[0.05689114, 0.0862954 , 0.6325046 ]]).astype(np.float32)
label_np = np.array([0, 2, 1, 1, 0]).astype(np.int64)
place = paddle.CPUPlace()
paddle.disable_static(place)
input = paddle.to_variable(input_np)
log_out = log_softmax(input)
label = paddle.to_variable(label_np)
result = nll_loss(log_out, label)
print(result.numpy()) # [1.0720209]
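下面补充一个假设性的 NumPy 验算示意(非官方示例),说明上例中 [1.0720209] 的由来:先计算 log_softmax,再取标签位置的负对数似然并按 ``reduction='mean'`` 求平均:

.. code-block:: python

    # 假设性的验算示意:手工计算 negative log likelihood
    import numpy as np

    input_np = np.array([[0.88103855, 0.9908683 , 0.6226845 ],
                         [0.53331435, 0.07999352, 0.8549948 ],
                         [0.25879037, 0.39530203, 0.698465  ],
                         [0.73427284, 0.63575995, 0.18827209],
                         [0.05689114, 0.0862954 , 0.6325046 ]]).astype(np.float32)
    label_np = np.array([0, 2, 1, 1, 0]).astype(np.int64)

    # log_softmax:每行减去对应的 log-sum-exp
    log_prob = input_np - np.log(np.exp(input_np).sum(axis=1, keepdims=True))
    # 取每个样本标签位置的负对数概率并求平均
    nll = -log_prob[np.arange(len(label_np)), label_np].mean()
    print(nll)  # 约 1.0720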
......@@ -11,5 +11,6 @@ loss
loss_cn/BCELoss_cn.rst
loss_cn/CrossEntropyLoss_cn.rst
loss_cn/L1Loss_cn.rst
loss_cn/MarginRankingLoss_cn.rst
loss_cn/MSELoss_cn.rst
loss_cn/NLLLoss_cn.rst
L1Loss
-------------------------------
.. py:function:: paddle.nn.loss.L1Loss(reduction='mean')
.. py:class:: paddle.nn.loss.L1Loss(reduction='mean', name=None)
该接口用于创建一个L1Loss的可调用类,L1Loss计算输入input和标签label间的 `L1 loss` 损失。
该接口用于创建一个L1Loss的可调用类,L1Loss计算输入x和标签label间的 `L1 loss` 损失。
该损失函数的数学计算公式如下:
当 `reduction` 设置为 ``'none'`` 时,
.. math::
Out = |input - label|
Out = \lvert x - label\rvert
当 `reduction` 设置为 ``'mean'`` 时,
.. math::
Out = MEAN(|input - label|)
Out = MEAN(\lvert x - label\rvert)
当 `reduction` 设置为 ``'sum'`` 时,
.. math::
Out = SUM(|input - label|)
Out = SUM(\lvert x - label\rvert)
输入input和标签label的维度是[N, *], 其中N是batch_size, `*` 是任意其他维度。
如果 :attr:`reduction` 是 ``'none'``, 则输出Loss的维度为 [N, *], 与输入input相同。
如果 :attr:`reduction` 是 ``'mean'`` 或 ``'sum'``, 则输出Loss的维度为 [1]。
参数:
- **reduction** (string, 可选): - 指定应用于输出结果的计算方式,可选值有: ``'none'``, ``'mean'``, ``'sum'`` 。默认为 ``'mean'``,计算 `L1Loss` 的均值;设置为 ``'sum'`` 时,计算 `L1Loss` 的总和;设置为 ``'none'`` 时,则返回L1Loss。数据类型为string。
参数
:::::::::
- **reduction** (str, 可选): - 指定应用于输出结果的计算方式,可选值有: ``'none'``, ``'mean'``, ``'sum'`` 。默认为 ``'mean'``,计算 `L1Loss` 的均值;设置为 ``'sum'`` 时,计算 `L1Loss` 的总和;设置为 ``'none'`` 时,则返回 `L1Loss`。
- **name** (str,可选): - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
返回:返回计算L1Loss的可调用对象。
形状
:::::::::
- **x** (Tensor): - 输入的Tensor,维度是[N, *], 其中N是batch size, `*` 是任意数量的额外维度。数据类型为:float32、float64、int32、int64。
- **label** (Tensor): - 标签,维度是[N, *], 与 ``x`` 相同。数据类型为:float32、float64、int32、int64。
- **output** (Tensor): - 输入 ``x`` 和标签 ``label`` 间的 `L1 loss` 损失。如果 :attr:`reduction` 是 ``'none'``, 则输出Loss的维度为 [N, *], 与输入 ``x`` 相同。如果 :attr:`reduction` 是 ``'mean'`` 或 ``'sum'``, 则输出Loss的维度为 [1]。
**代码示例**
代码示例
:::::::::
.. code-block:: python
# declarative mode
import paddle.fluid as fluid
import numpy as np
import paddle
input = fluid.data(name="input", shape=[1])
label = fluid.data(name="label", shape=[1])
l1_loss = paddle.nn.loss.L1Loss(reduction='mean')
output = l1_loss(input,label)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
input_data = np.array([1.5]).astype("float32")
label_data = np.array([1.7]).astype("float32")
output_data = exe.run(fluid.default_main_program(),
feed={"input":input_data, "label":label_data},
fetch_list=[output],
return_numpy=True)
print(output_data) # [array([0.2], dtype=float32)]
# imperative mode
import paddle.fluid.dygraph as dg
with dg.guard(place) as g:
input = dg.to_variable(input_data)
label = dg.to_variable(label_data)
l1_loss = paddle.nn.loss.L1Loss(reduction='mean')
output = l1_loss(input,label)
print(output.numpy()) # [0.2]
import numpy as np
paddle.disable_static()
x_data = np.array([[1.5, 0.8], [0.2, 1.3]]).astype("float32")
label_data = np.array([[1.7, 1], [0.4, 0.5]]).astype("float32")
x = paddle.to_variable(x_data)
label = paddle.to_variable(label_data)
l1_loss = paddle.nn.loss.L1Loss()
output = l1_loss(x, label)
print(output.numpy())
# [0.35]
l1_loss = paddle.nn.loss.L1Loss(reduction='sum')
output = l1_loss(x, label)
print(output.numpy())
# [1.4]
l1_loss = paddle.nn.loss.L1Loss(reduction='none')
output = l1_loss(x, label)
print(output.numpy())
# [[0.20000005 0.19999999]
# [0.2 0.79999995]]
MSELoss
-------------------------------
.. py:function:: paddle.nn.loss.MSELoss(input,label)
.. py:class:: paddle.nn.loss.MSELoss(reduction='mean')
该OP用于计算预测值和目标值的均方差误差。
......@@ -23,13 +23,15 @@ MSELoss
Out = \operatorname{sum}((input - label)^2)
参数:
- **input** (Variable) - 预测值,维度为 :math:`[N_1, N_2, ..., N_k, D]` 的多维Tensor,其中最后一维D是类别数目。数据类型为float32或float64。
- **label** (Variable) - 目标值,维度为 :math:`[N_1, N_2, ..., N_k, D]` 的多维Tensor,其中最后一维D是类别数目。数据类型为float32或float64。
- **reduction** (str, 可选) - 约简方式,可以是 'none' | 'mean' | 'sum'。设为'none'时不使用约简,设为'mean'时返回loss的均值,设为'sum'时返回loss的和。
返回:预测值和目标值的均方差
形状:
- **input** (Tensor) - 预测值,维度为 :math:`[N_1, N_2, ..., N_k]` 的多维Tensor。数据类型为float32或float64。
- **label** (Tensor) - 目标值,维度为 :math:`[N_1, N_2, ..., N_k]` 的多维Tensor。数据类型为float32或float64。
返回:变量(Tensor), 预测值和目标值的均方差, 数值类型与输入相同
返回类型:变量(Variable)
**代码示例**:
......@@ -37,32 +39,32 @@ MSELoss
import numpy as np
import paddle
from paddle import fluid
import paddle.fluid.dygraph as dg
# static graph mode
paddle.enable_static()
mse_loss = paddle.nn.loss.MSELoss()
input = fluid.data(name="input", shape=[1])
label = fluid.data(name="label", shape=[1])
place = fluid.CPUPlace()
input = paddle.data(name="input", shape=[1])
label = paddle.data(name="label", shape=[1])
place = paddle.CPUPlace()
input_data = np.array([1.5]).astype("float32")
label_data = np.array([1.7]).astype("float32")
# declarative mode
output = mse_loss(input,label)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
exe = paddle.static.Executor(place)
exe.run(paddle.static.default_startup_program())
output_data = exe.run(
fluid.default_main_program(),
paddle.static.default_main_program(),
feed={"input":input_data, "label":label_data},
fetch_list=[output],
return_numpy=True)
print(output_data)
# [array([0.04000002], dtype=float32)]
# imperative mode
with dg.guard(place) as g:
input = dg.to_variable(input_data)
label = dg.to_variable(label_data)
# dynamic graph mode
paddle.disable_static()
input = paddle.to_variable(input_data)
label = paddle.to_variable(label_data)
output = mse_loss(input, label)
print(output.numpy())
# [0.04000002]
.. _cn_api_nn_loss_MarginRankingLoss:
MarginRankingLoss
-------------------------------
.. py:class:: paddle.nn.loss.MarginRankingLoss(margin=0.0, reduction='mean', name=None)
该接口用于创建一个 ``MarginRankingLoss`` 的可调用类,计算输入input,other 和 标签label间的 `margin rank loss` 损失。
该损失函数的数学计算公式如下:
.. math::
margin\_rank\_loss = max(0, -label * (input - other) + margin)
当 `reduction` 设置为 ``'mean'`` 时,
.. math::
Out = MEAN(margin\_rank\_loss)
当 `reduction` 设置为 ``'sum'`` 时,
.. math::
Out = SUM(margin\_rank\_loss)
当 `reduction` 设置为 ``'none'`` 时,直接返回最原始的 `margin_rank_loss` 。
参数
::::::::
- **margin** (float,可选): - 用于加和的margin值,默认值为0。
- **reduction** (string,可选): - 指定应用于输出结果的计算方式,可选值有: ``'none'`` 、 ``'mean'`` 、 ``'sum'`` 。如果设置为 ``'none'`` ,则直接返回最原始的 ``margin_rank_loss`` 。如果设置为 ``'sum'`` ,则返回 ``margin_rank_loss`` 的总和。如果设置为 ``'mean'`` ,则返回 ``margin_rank_loss`` 的平均值。默认值为 ``'mean'`` 。
- **name** (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
形状
::::::::
- **input** - N-D Tensor, 维度是[N,*] 其中N 是batch size,`*` 是任意数量的额外维度,数据类型为float32或float64。
- **other** - 与 ``input`` 的形状、数据类型相同。
- **label** - 与 ``input`` 的形状、数据类型相同。
- **output** - 如果 :attr:`reduction` 为 ``'sum'`` 或者是 ``'mean'`` ,则形状为 :math:`[1]` ,否则shape和输入 `input` 保持一致 。数据类型与 ``input``、 ``other`` 相同。
返回
::::::::
返回计算MarginRankingLoss的可调用对象。
代码示例
::::::::
.. code-block:: python
import numpy as np
import paddle
paddle.disable_static()
input = paddle.to_variable(np.array([[1, 2], [3, 4]]).astype("float32"))
other = paddle.to_variable(np.array([[2, 1], [2, 4]]).astype("float32"))
label = paddle.to_variable(np.array([[1, -1], [-1, -1]]).astype("float32"))
margin_rank_loss = paddle.nn.MarginRankingLoss()
loss = margin_rank_loss(input, other, label)
print(loss.numpy()) # [0.75]
.. _cn_api_nn_loss_NLLLoss:
NLLLoss
-------------------------------
.. py:function:: paddle.nn.loss.NLLLoss(weight=None, reduction='mean', ignore_index=-100)
.. py:class:: paddle.nn.loss.NLLLoss(weight=None, ignore_index=-100, reduction='mean', name=None)
OP计算输入input和标签label间的 `negative log likelihood loss` 损失 ,可用于训练一个 `n` 类分类器。
该接口可创建一个NLLLoss可调用类,计算输入x和标签label间的 `negative log likelihood loss` 损失,可用于训练一个 `n` 类分类器。
如果提供 `weight` 参数的话,它是一个 `1-D` 的tensor, 里面的值对应类别的权重。当你的训练集样本
不均衡的话,使用这个参数是非常有用的。
......@@ -28,48 +30,41 @@ NLLLoss
\text{if reduction} = \text{'sum'.}
\end{cases}
参数:
- **input** (Variable): - 输入 `Tensor`, 其形状为 :math:`[N, C]` , 其中 `C` 为类别数。但是对于多维度的情形下,它的形状为 :math:`[N, C, d_1, d_2, ..., d_K]` 。数据类型为float32或float64。
- **label** (Variable): - 输入input对应的标签值。其形状为 :math:`[N,]` 或者 :math:`[N, d_1, d_2, ..., d_K]`, 数据类型为int64。
- **weight** (Variable, 可选): - 手动指定每个类别的权重。其默认为 `None` 。如果提供该参数的话,长度必须为 `num_classes` 。数据类型为float32或float64。
- **reduction** (string, 可选): - 指定应用于输出结果的计算方式,可选值有: `none`, `mean`, `sum` 。默认为 `mean` ,计算 `mini-batch` loss均值。设置为 `sum` 时,计算 `mini-batch` loss的总和。设置为 `none` 时,则返回loss Tensor。数据类型为string。
参数
:::::::::
- **weight** (Tensor, 可选): - 手动指定每个类别的权重。其默认为 `None` 。如果提供该参数的话,长度必须为 `num_classes` 。数据类型为float32或float64。
- **ignore_index** (int64, 可选): - 指定一个忽略的标签值,此标签值不参与计算。默认值为-100。数据类型为int64。
- **reduction** (str, 可选): - 指定应用于输出结果的计算方式,可选值有: `none`, `mean`, `sum` 。默认为 `mean` ,计算 `mini-batch` loss均值。设置为 `sum` 时,计算 `mini-batch` loss的总和。设置为 `none` 时,则返回loss Tensor。数据类型为string。
- **name** (str, 可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name` 。
返回:返回存储表示 `negative log likelihood loss` 的损失值。
返回类型:Variable
形状
:::::::::
- **input** (Tensor): - 输入 `Tensor`, 其形状为 :math:`[N, C]` , 其中 `C` 为类别数。但是对于多维度的情形下,它的形状为 :math:`[N, C, d_1, d_2, ..., d_K]` 。数据类型为float32或float64。
- **label** (Tensor): - 输入 `input` 对应的标签值。其形状为 :math:`[N,]` 或者 :math:`[N, d_1, d_2, ..., d_K]`, 数据类型为int64。
- **output** (Tensor): - 输入 `input` 和 `label` 间的 `negative log likelihood loss` 损失。如果 `reduction` 为 `'none'` ,则输出Loss形状为 `[N, *]` 。 如果 `reduction` 为 `'sum'` 或者 `'mean'` ,则输出Loss形状为 `'[1]'` 。
**代码示例**
代码示例
:::::::::
.. code-block:: python
# declarative mode
import paddle.fluid as fluid
import numpy as np
import paddle
input_np = np.random.random(size=(10, 10)).astype(np.float32)
label_np = np.random.randint(0, 10, size=(10,)).astype(np.int64)
prog = fluid.Program()
startup_prog = fluid.Program()
place = fluid.CPUPlace()
with fluid.program_guard(prog, startup_prog):
input = fluid.data(name='input', shape=[10, 10], dtype='float32')
label = fluid.data(name='label', shape=[10], dtype='int64')
nll_loss = paddle.nn.loss.NLLLoss()
res = nll_loss(input, label)
exe = fluid.Executor(place)
static_result = exe.run(
prog,
feed={"input": input_np,
"label": label_np},
fetch_list=[res])
print(static_result)
# imperative mode
import paddle.fluid.dygraph as dg
with dg.guard(place) as g:
input = dg.to_variable(input_np)
label = dg.to_variable(label_np)
output = nll_loss(input, label)
print(output.numpy())
import numpy as np
nll_loss = paddle.nn.layer.NLLLoss()
log_softmax = paddle.nn.LogSoftmax(axis=1)
input_np = np.array([[0.88103855, 0.9908683 , 0.6226845 ],
[0.53331435, 0.07999352, 0.8549948 ],
[0.25879037, 0.39530203, 0.698465 ],
[0.73427284, 0.63575995, 0.18827209],
[0.05689114, 0.0862954 , 0.6325046 ]]).astype(np.float32)
label_np = np.array([0, 2, 1, 1, 0]).astype(np.int64)
place = paddle.CPUPlace()
paddle.disable_static(place)
input = paddle.to_variable(input_np)
log_out = log_softmax(input)
label = paddle.to_variable(label_np)
result = nll_loss(log_out, label)
print(result.numpy()) # [1.0720209]
.. _cn_api_nn_cn_margin_rank_loss:
margin_rank_loss
-------------------------------
:doc_source: paddle.fluid.layers.margin_rank_loss
.. _cn_api_nn_cn_matrix_nms:
matrix_nms
-------------------------------
:doc_source: paddle.fluid.layers.matrix_nms
......@@ -2,6 +2,118 @@
softmax
-------------------------------
:doc_source: paddle.fluid.layers.softmax
.. py:class:: paddle.nn.functional.softmax(x, axis=-1, name=None)
该OP实现了softmax层。OP的计算过程如下:
步骤1:输入 ``x`` 的 ``axis`` 维会被置换到最后一维;
步骤2:将输入 ``x`` 在逻辑上变换为二维矩阵。二维矩阵第一维(列长度)是输入除最后一维之外的其他维度值的乘积,第二维(行长度)和输入 ``axis`` 维的长度相同;对于矩阵的每一行,softmax操作对其进行重新缩放,使得该行的每个元素在 \[0,1\] 范围内,并且总和为1;
步骤3:softmax操作执行完成后,执行步骤1和步骤2的逆运算,将二维矩阵恢复至和输入 ``x`` 相同的维度。
上述步骤2中softmax操作计算过程如下:
- 对于二维矩阵的每一行,计算K维向量(K是输入第 ``axis`` 维的长度)中指定位置的指数值和全部位置指数值的和。
- 指定位置指数值与全部位置指数值之和的比值就是softmax操作的输出。
对于二维矩阵中的第i行和第j列有:
.. math::
Out[i,j] = \frac{exp(X[i,j])}{\sum_j exp(X[i,j])}
- 示例1(矩阵一共有三维。axis = -1,表示沿着最后一维(即第三维)做softmax操作)
.. code-block:: python
输入
x.shape = [2, 3, 4]
x.data = [[[2.0, 3.0, 4.0, 5.0],
[3.0, 4.0, 5.0, 6.0],
[7.0, 8.0, 8.0, 9.0]],
[[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[6.0, 7.0, 8.0, 9.0]]]
axis = -1
输出
out.shape = [2, 3, 4]
out.data = [[[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
[0.07232949, 0.19661193, 0.19661193, 0.53444665]],
[[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
[0.0320586 , 0.08714432, 0.23688282, 0.64391426]]]
- 示例2(矩阵一共有三维。axis = 1,表示沿着第二维做softmax操作)
.. code-block:: python
输入
x.shape = [2, 3, 4]
x.data = [[[2.0, 3.0, 4.0, 5.0],
[3.0, 4.0, 5.0, 6.0],
[7.0, 8.0, 8.0, 9.0]],
[[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[6.0, 7.0, 8.0, 9.0]]]
axis = 1
输出
out.shape = [2, 3, 4]
out.data = [[[0.00657326, 0.00657326, 0.01714783, 0.01714783],
[0.01786798, 0.01786798, 0.04661262, 0.04661262],
[0.97555875, 0.97555875, 0.93623955, 0.93623955]],
[[0.00490169, 0.00490169, 0.00490169, 0.00490169],
[0.26762315, 0.26762315, 0.26762315, 0.26762315],
[0.72747516, 0.72747516, 0.72747516, 0.72747516]]]
参数
::::::::::
- x (Tensor) - 输入的多维 ``Tensor`` ,数据类型为:float32、float64。
- axis (int, 可选) - 指定对输入 ``x`` 进行运算的轴。``axis`` 的有效范围是[-D, D),D是输入 ``x`` 的维度, ``axis`` 为负值时与 :math:`axis + D` 等价。默认值为-1。
- name (str, 可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
返回
::::::::::
``Tensor`` ,数据类型和形状同 ``x`` 一致。
代码示例
::::::::::
.. code-block:: python
import paddle
import paddle.nn.functional as F
import numpy as np
paddle.enable_imperative()
x = np.array([[[2.0, 3.0, 4.0, 5.0],
[3.0, 4.0, 5.0, 6.0],
[7.0, 8.0, 8.0, 9.0]],
[[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[6.0, 7.0, 8.0, 9.0]]], 'float32')
x = paddle.imperative.to_variable(x)
out = F.softmax(x)
# [[[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
# [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
# [0.07232949, 0.19661193, 0.19661193, 0.53444665]],
# [[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
# [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
# [0.0320586 , 0.08714432, 0.23688282, 0.64391426]]]
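作为补充,下面是一个假设性的 NumPy 验算示意(非官方示例),按上文步骤沿最后一维手工计算 softmax,结果应与 ``F.softmax(x)`` 一致:

.. code-block:: python

    # 假设性的验算示意:沿最后一维手工计算 softmax
    import numpy as np

    x_np = np.array([[[2.0, 3.0, 4.0, 5.0],
                      [3.0, 4.0, 5.0, 6.0],
                      [7.0, 8.0, 8.0, 9.0]],
                     [[1.0, 2.0, 3.0, 4.0],
                      [5.0, 6.0, 7.0, 8.0],
                      [6.0, 7.0, 8.0, 9.0]]], 'float32')
    # 先减去每行的最大值保证数值稳定,再做指数归一化
    e = np.exp(x_np - x_np.max(axis=-1, keepdims=True))
    manual = e / e.sum(axis=-1, keepdims=True)
    print(manual[0, 0])  # [0.0320586  0.08714432 0.23688282 0.64391426]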
......@@ -31,7 +31,6 @@ paddle.optimizer
optimizer_cn/ModelAverage_cn.rst
optimizer_cn/Momentum_cn.rst
optimizer_cn/MomentumOptimizer_cn.rst
optimizer_cn/PipelineOptimizer_cn.rst
optimizer_cn/RecomputeOptimizer_cn.rst
optimizer_cn/RMSPropOptimizer_cn.rst
optimizer_cn/SGD_cn.rst
......
......@@ -101,6 +101,49 @@ Adadelta优化器,具体细节可参考论文 `ADADELTA: AN ADAPTIVE LEARNING
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -120,6 +120,49 @@ Adaptive Gradient 优化器(自适应梯度优化器,简称Adagrad)可以针
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -19,7 +19,7 @@ Adam优化器出自 `Adam论文 <https://arxiv.org/abs/1412.6980>`_ 的第二节
.. math::
moment\_2\_out=\beta_2*moment\_2+(1-\beta_2)*grad*grad
.. math::
learning\_rate=\frac{learning\_rate}{1-\beta_1^t}
learning\_rate=learning\_rate*\frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t}
.. math::
param\_out=param-learning\_rate*\frac{moment\_1}{\sqrt{moment\_2}+\epsilon}\\
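为了更直观地理解上面修正后的学习率公式,下面给出一个假设性的单步更新示意(使用 NumPy 的标量演算,非框架内部实现):

.. code-block:: python

    # 假设性的单步 Adam 更新示意(标量情形),对应上文公式
    import numpy as np

    lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8
    param, m, v, t = 1.0, 0.0, 0.0, 1
    grad = 0.2

    m = beta1 * m + (1 - beta1) * grad                      # moment_1_out
    v = beta2 * v + (1 - beta2) * grad * grad               # moment_2_out
    lr_t = lr * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)  # 偏置修正后的学习率
    param = param - lr_t * m / (np.sqrt(v) + eps)           # param_out
    print(param)  # 约 0.999,首步更新幅度约等于 lr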
......@@ -84,7 +84,7 @@ Adam优化器出自 `Adam论文 <https://arxiv.org/abs/1412.6980>`_ 的第二节
avg_cost = fluid.layers.mean(cost)
# define beta decay variable
def get_decayed_betas(beta1_init, beta2_init, decay_steps, decay_rate)
def get_decayed_betas(beta1_init, beta2_init, decay_steps, decay_rate):
global_step = lr_scheduler._decay_step_counter()
beta1 = fluid.layers.create_global_var(
......@@ -113,7 +113,7 @@ Adam优化器出自 `Adam论文 <https://arxiv.org/abs/1412.6980>`_ 的第二节
beta1, beta2 = get_decayed_betas(0.9, 0.99, 1e5, 0.9)
adam_optimizer = fluid.optimizer.AdamOptimizer(
learning_rate=0.01,
beta1=beta1
beta1=beta1,
beta2=beta2)
adam_optimizer.minimize(avg_cost)
......@@ -194,6 +194,49 @@ Adam优化器出自 `Adam论文 <https://arxiv.org/abs/1412.6980>`_ 的第二节
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -134,6 +134,49 @@ Adamax优化器是参考 `Adam论文 <https://arxiv.org/abs/1412.6980>`_ 第7节
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -73,6 +73,29 @@ DGC还使用动量因子掩藏(momentum factor masking)和预训练(warm-u
.. code-block:: python
import paddle.fluid as fluid
def network():
x = fluid.layers.data(name='x', shape=[1], dtype='int64', lod_level=0)
y = fluid.layers.data(name='y', shape=[1], dtype='int64', lod_level=0)
emb_x = fluid.layers.embedding(
input=x,
size=[10, 2],
is_sparse=False)
emb_y = fluid.layers.embedding(
input=y,
size=[10, 2],
is_sparse=False)
concat = fluid.layers.concat([emb_x, emb_y], axis=1)
fc = fluid.layers.fc(input=concat,
name="fc",
size=1,
num_flatten_dims=1,
bias_attr=False)
loss = fluid.layers.reduce_mean(fc)
return loss
loss = network()
optimizer = fluid.optimizer.SGD(learning_rate=0.1)
params_grads = optimizer.backward(loss)
......
......@@ -114,6 +114,49 @@ Decayed Adagrad优化器,可以看做是引入了衰减率的 `Adagrad <http:/
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -125,6 +125,48 @@ FTRL 原始论文: ( `https://www.eecs.tufts.edu/~dsculley/papers/ad-click-predi
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -131,6 +131,49 @@ Deep Learning: Training BERT in 76 minutes <https://arxiv.org/pdf/1904.00962.pdf
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -38,6 +38,7 @@ LarsMomentumOptimizer
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
np_inp = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
inp = fluid.layers.data(
......@@ -100,6 +101,49 @@ LarsMomentumOptimizer
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -29,7 +29,7 @@ LookaheadOptimizer
import paddle
import paddle.fluid as fluid
import numpy as np
import numpy.random as random
x = fluid.layers.data(name='x', shape=[2], dtype='float32')
label = fluid.layers.data(name="label", shape=[1], dtype="int64")
......@@ -46,11 +46,14 @@ LookaheadOptimizer
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
def train_reader(limit=5):
for i in range(limit):
yield random.random([2]).astype('float32'), random.random([1]).astype('int64')
feeder = fluid.DataFeeder(feed_list=[x, label], place=place)
reader = paddle.batch(paddle.reader.shuffle(train_reader, buf_size=50000),batch_size=1)
step = 0
while(step < 10):
step += 1
for batch_data in reader():
exe.run(fluid.default_main_program(),
feed=feeder.feed(batch_data))
......@@ -134,6 +134,50 @@ MomentumOptimizer
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
**注意:**
......
.. _cn_api_fluid_optimizer_PipelineOptimizer:
PipelineOptimizer
-------------------------------
.. py:class:: paddle.fluid.optimizer.PipelineOptimizer(optimizer, cut_list=None, place_list=None, concurrency_list=None, queue_size=30, sync_steps=1, start_cpu_core_id=0)
:api_attr: 声明式编程模式(静态图)
使用流水线模式进行训练。
Program会根据切分列表cut_list进行分割。如果cut_list的长度是k,则整个program(包括反向部分)将被分割为2*k-1个section。 所以place_list和concurrency_list的长度也必须是2*k-1。
.. note::
虽然我们在流水线训练模式中采用异步更新的方式来加速,但最终的效果会依赖于每条流水线的训练进程。我们将在未来尝试同步模式。
参数:
- **optimizer** (Optimizer) - 基础优化器,如SGD
- **cut_list** (list of Variable list) - main_program的cut变量列表
- **place_list** (list of Place) - 对应section运行所在的place
- **concurrency_list** (list of int) - 指定每个section的并发度列表
- **queue_size** (int) - 每个section都会消费其输入队列(in-scope queue)中的scope,并向输出队列(out-scope queue)产出scope。 此参数的作用就是指定队列的大小。 可选,默认值:30
- **sync_steps** (int) - 不同显卡之间的同步周期数。可选,默认值:1
- **start_cpu_core_id** (int) - 指定所使用的第一个CPU核的id。可选,默认值:0
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x = fluid.layers.data(name='x', shape=[1], dtype='int64', lod_level=0)
y = fluid.layers.data(name='y', shape=[1], dtype='int64', lod_level=0)
emb_x = layers.embedding(input=x, param_attr=fluid.ParamAttr(name="embx"), size=[10,2], is_sparse=False)
emb_y = layers.embedding(input=y, param_attr=fluid.ParamAttr(name="emby",learning_rate=0.9), size=[10,2], is_sparse=False)
concat = layers.concat([emb_x, emb_y], axis=1)
fc = layers.fc(input=concat, name="fc", size=1, num_flatten_dims=1, bias_attr=False)
loss = layers.reduce_mean(fc)
optimizer = fluid.optimizer.SGD(learning_rate=0.5)
optimizer = fluid.optimizer.PipelineOptimizer(optimizer,
cut_list=[[emb_x, emb_y], [loss]],
place_list=[fluid.CPUPlace(), fluid.CUDAPlace(0), fluid.CPUPlace()],
concurrency_list=[1, 1, 4],
queue_size=2,
sync_steps=1,
)
optimizer.minimize(loss)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
filelist = [] # you should set your own filelist, e.g. filelist = ["dataA.txt"]
dataset = fluid.DatasetFactory().create_dataset("FileInstantDataset")
dataset.set_use_var([x,y])
dataset.set_batch_size(batch_size)
dataset.set_filelist(filelist)
exe.train_from_dataset(
fluid.default_main_program(),
dataset,
thread=2,
debug=False,
fetch_list=[],
fetch_info=[],
print_period=1)
......@@ -151,6 +151,48 @@ RMSPropOptimizer
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -127,6 +127,48 @@ SGDOptimizer
optimizer.minimize(out)
optimizer.clear_gradients()
.. py:method:: set_lr()
**注意:**
**1. 该API只在** `Dygraph <../../user_guides/howto/dygraph/DyGraph.html>`_ **模式下生效**
手动设置当前 ``optimizer`` 的学习率。当使用LearningRateDecay时,无法使用该API手动设置学习率,因为这将导致冲突。
参数:
value (float|Variable) - 需要设置的学习率的值。
返回:无
**代码示例**
.. code-block:: python
import paddle.fluid as fluid
with fluid.dygraph.guard():
linear = fluid.dygraph.nn.Linear(10, 10)
adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())
# 通过Python float数值手动设置学习率
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
adam.set_lr(lr_list[i])
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.2
# current lr is 0.3
# current lr is 0.4
# current lr is 0.5
# current lr is 0.6
# 通过 框架的Variable 设置学习率
lr_var = fluid.layers.create_global_var(shape=[1], value=0.7, dtype='float32')
adam.set_lr(lr_var)
print("current lr is {}".format(adam.current_step_lr()))
# 打印结果:
# current lr is 0.7
.. py:method:: current_step_lr()
......
......@@ -44,8 +44,6 @@ paddle
paddle_cn/elementwise_add_cn.rst
paddle_cn/elementwise_div_cn.rst
paddle_cn/elementwise_floordiv_cn.rst
paddle_cn/elementwise_max_cn.rst
paddle_cn/elementwise_min_cn.rst
paddle_cn/elementwise_mod_cn.rst
paddle_cn/elementwise_mul_cn.rst
paddle_cn/elementwise_pow_cn.rst
......@@ -95,11 +93,16 @@ paddle
paddle_cn/log_cn.rst
paddle_cn/manual_seed_cn.rst
paddle_cn/matmul_cn.rst
paddle_cn/max_cn.rst
paddle_cn/maximum_cn.rst
paddle_cn/mean_cn.rst
paddle_cn/meshgrid_cn.rst
paddle_cn/min_cn.rst
paddle_cn/minimum_cn.rst
paddle_cn/multiplex_cn.rst
paddle_cn/mul_cn.rst
paddle_cn/name_scope_cn.rst
paddle_cn/no_grad_cn.rst
paddle_cn/nonzero_cn.rst
paddle_cn/not_equal_cn.rst
paddle_cn/ones_cn.rst
......
......@@ -2,6 +2,6 @@
ExecutionStrategy
-------------------------------
:doc_source: paddle.framework.ExecutionStrategy
:doc_source: paddle.fluid.ExecutionStrategy
......@@ -2,6 +2,6 @@
argsort
-------------------------------
:doc_source: paddle.fluid.layers.argsort
:doc_source: paddle.tensor.argsort
......@@ -2,6 +2,6 @@
cumsum
-------------------------------
:doc_source: paddle.fluid.layers.cumsum
:doc_source: paddle.tensor.cumsum
.. _cn_api_paddle_cn_elementwise_equal:
elementwise_equal
-------------------------------
:doc_source: paddle.fluid.layers.equal
.. _cn_api_paddle_cn_elementwise_max:
elementwise_max
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_max
.. _cn_api_paddle_cn_elementwise_min:
elementwise_min
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_min
.. _cn_api_paddle_cn_equal_all:
equal_all
-------------------------------
:doc_source: paddle.tensor.equal_all
......@@ -2,6 +2,6 @@
greater_equal
-------------------------------
:doc_source: paddle.fluid.layers.greater_equal
:doc_source: paddle.tensor.greater_equal
......@@ -2,6 +2,6 @@
greater_than
-------------------------------
:doc_source: paddle.fluid.layers.greater_than
:doc_source: paddle.tensor.greater_than
......@@ -2,6 +2,6 @@
less_equal
-------------------------------
:doc_source: paddle.fluid.layers.less_equal
:doc_source: paddle.tensor.less_equal
......@@ -2,6 +2,6 @@
less_than
-------------------------------
:doc_source: paddle.fluid.layers.less_than
:doc_source: paddle.tensor.less_than
.. _cn_api_paddle_cn_name_scope:
name_scope
-------------------------------
:doc_source: paddle.fluid.dygraph.no_grad
......@@ -2,6 +2,6 @@
not_equal
-------------------------------
:doc_source: paddle.fluid.layers.not_equal
:doc_source: paddle.tensor.not_equal
......@@ -2,6 +2,6 @@
sort
-------------------------------
:doc_source: paddle.fluid.layers.argsort
:doc_source: paddle.tensor.sort
......@@ -38,16 +38,14 @@ paddle.tensor
tensor_cn/einsum_cn.rst
tensor_cn/elementwise_add_cn.rst
tensor_cn/elementwise_div_cn.rst
tensor_cn/elementwise_equal_cn.rst
tensor_cn/elementwise_floordiv_cn.rst
tensor_cn/elementwise_max_cn.rst
tensor_cn/elementwise_min_cn.rst
tensor_cn/elementwise_mod_cn.rst
tensor_cn/elementwise_mul_cn.rst
tensor_cn/elementwise_pow_cn.rst
tensor_cn/elementwise_sub_cn.rst
tensor_cn/elementwise_sum_cn.rst
tensor_cn/equal_cn.rst
tensor_cn/equal_all_cn.rst
tensor_cn/erf_cn.rst
tensor_cn/exp_cn.rst
tensor_cn/expand_as_cn.rst
......@@ -65,6 +63,7 @@ paddle.tensor
tensor_cn/greater_than_cn.rst
tensor_cn/has_inf_cn.rst
tensor_cn/has_nan_cn.rst
tensor_cn/histogram_cn.rst
tensor_cn/increment_cn.rst
tensor_cn/index_sample_cn.rst
tensor_cn/index_select_cn.rst
......@@ -88,9 +87,11 @@ paddle.tensor
tensor_cn/math_cn.rst
tensor_cn/matmul_cn.rst
tensor_cn/max_cn.rst
tensor_cn/maximum_cn.rst
tensor_cn/mean_cn.rst
tensor_cn/meshgrid_cn.rst
tensor_cn/min_cn.rst
tensor_cn/minimum_cn.rst
tensor_cn/mm_cn.rst
tensor_cn/mul_cn.rst
tensor_cn/multiplex_cn.rst
......
......@@ -3,33 +3,53 @@
arange
-------------------------------
.. py:function:: paddle.tensor.arange(start, end, step=1, dtype=None, name=None)
.. py:function:: paddle.arange(start=0, end=None, step=1, dtype=None, name=None)
:alias_main: paddle.arange
:alias: paddle.arange,paddle.tensor.arange,paddle.tensor.creation.arange
:update_api: paddle.fluid.layers.range
:alias: paddle.tensor.arange, paddle.tensor.creation.arange
API根据step均匀分隔给定数值区间[start, end),并返回该分隔结果
该OP返回以步长 ``step`` 均匀分隔给定数值区间[``start``, ``end``)的1-D Tensor,数据类型为 ``dtype``。
**参数**:
- **start** (float32 | float64 | int32 | int64 | Variable) - 区间起点,且区间包括此值, 当类型是Variable时,是shape为 [1] 的1-D Tensor。
- **end** (float32 | float64 | int32 | int64 | Variable) - 区间终点,通常区间不包括此值。但当step不是整数,且浮点数取整会影响输出的长度时例外。
- **step** (float32 | float64 | int32 | int64 | Variable) - 均匀分割的步长。
- **dtype** (str | core.VarDesc.VarType) - 输出Tensor的数据类型,可为 'float32', 'float64', 'int32', 'int64' 。
当 ``dtype`` 表示浮点类型时,为了避免浮点计算误差,建议给 ``end`` 加上一个极小值epsilon,使边界可以更加明确。
**返回**:均匀分割给定数值区间后得到的1-D Tensor, 数据类型为输入 dtype 。
参数
::::::::::
- **start** (float|int|Tensor) - 区间起点(且区间包括此值)。当 ``start`` 类型是Tensor时,是形状为[1]且数据类型为int32、int64、float32、float64的Tensor。如果仅指定 ``start`` ,而 ``end`` 为None,则区间为[0, ``start``)。默认值为0。
- **end** (float|int|Tensor, 可选) - 区间终点(且通常区间不包括此值)。当 ``end`` 类型是Tensor时,是形状为[1]且数据类型为int32、int64、float32、float64的Tensor。默认值为None。
- **step** (float|int|Tensor, 可选) - 均匀分割的步长。当 ``step`` 类型是Tensor时,是形状为[1]且数据类型为int32、int64、float32、float64的Tensor。默认值为1。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持int32、int64、float32、float64。当该参数值为None时, 输出Tensor的数据类型为int64。默认值为None.
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
**返回类型**:Variable
返回
::::::::::
Tensor: 以步长 ``step`` 均匀分割给定数值区间[``start``, ``end``)后得到的1-D Tensor, 数据类型为 ``dtype`` 。
**代码示例**
抛出异常
::::::::::
- ``TypeError`` - 如果 ``dtype`` 不是int32、int64、float32、float64。
代码示例
::::::::::
.. code-block:: python
import paddle
import paddle.fluid as fluid
with fluid.dygraph.guard():
x = paddle.arange(0, 6, 2)
# x: [0, 2, 4]
# x dtype: float32
import numpy as np
paddle.enable_imperative()
out1 = paddle.arange(5)
# [0, 1, 2, 3, 4]
out2 = paddle.arange(3, 9, 2.0)
# [3, 5, 7]
# use 4.999 instead of 5.0 to avoid floating point rounding errors
out3 = paddle.arange(4.999, dtype='float32')
# [0., 1., 2., 3., 4.]
start_var = paddle.imperative.to_variable(np.array([3]))
out4 = paddle.arange(start_var, 7)
# [3, 4, 5, 6]
......@@ -2,6 +2,61 @@
argsort
-------------------------------
:doc_source: paddle.fluid.layers.argsort
.. py:function:: paddle.argsort(x, axis=-1, descending=False, name=None)
:alias_main: paddle.argsort
:alias: paddle.argsort,paddle.tensor.argsort,paddle.tensor.search.argsort
对输入变量沿给定轴进行排序,输出排序好的数据的相应索引,其维度和输入相同。默认升序排列,如果需要降序排列设置 ``descending=True`` 。
参数:
- **x** (Tensor) - 输入的多维 ``Tensor`` ,支持的数据类型:float32、float64、int16、int32、int64、uint8。
- **axis** (int,可选) - 指定对输入Tensor进行运算的轴, ``axis`` 的有效范围是[-R, R),R是输入 ``x`` 的Rank, ``axis`` 为负时与 ``axis`` +R 等价。默认值为-1。
- **descending** (bool,可选) - 指定算法排序的方向。如果设置为True,算法按照降序排序。如果设置为False或者不设置,按照升序排序。默认值为False。
- **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:Tensor, 排序后索引信息(与 ``x`` 维度信息一致),数据类型为int64。
**代码示例**:
.. code-block:: python
import paddle
import paddle.imperative as imperative
import numpy as np
paddle.enable_imperative()
input_array = np.array([[[5,8,9,5],
[0,0,1,7],
[6,9,2,4]],
[[5,2,4,2],
[4,7,7,9],
[1,7,0,6]]]).astype(np.float32)
x = imperative.to_variable(input_array)
out1 = paddle.argsort(x=x, axis=-1)
out2 = paddle.argsort(x=x, axis=0)
out3 = paddle.argsort(x=x, axis=1)
print(out1.numpy())
#[[[0 3 1 2]
# [0 1 2 3]
# [2 3 0 1]]
# [[1 3 2 0]
# [0 1 2 3]
# [2 0 3 1]]]
print(out2.numpy())
#[[[0 1 1 1]
# [0 0 0 0]
# [1 1 1 0]]
# [[1 0 0 0]
# [1 1 1 1]
# [0 0 0 1]]]
print(out3.numpy())
#[[[1 1 1 2]
# [0 0 2 0]
# [2 2 0 1]]
# [[2 0 2 0]
# [1 1 0 2]
# [0 2 1 1]]]
.. _cn_api_tensor_cholesky:
cholesky
-------------------------------
**版本升级,文档正在开发中**
.. py:function:: paddle.cholesky(x, upper=False, name=None)
:alias_main: paddle.cholesky
:alias: paddle.cholesky, paddle.tensor.cholesky, paddle.tensor.linalg.cholesky
计算一个对称正定矩阵或一批对称正定矩阵的Cholesky分解。如果 `upper` 是 `True` ,
则分解形式为 :math:`A = U ^ {T} U` , 返回的矩阵U是上三角矩阵。
否则,分解形式为 :math:`A = LL ^ {T}` ,并返回矩阵 :math:`L` 是下三角矩阵。
参数:
- **x** (Variable)- 输入变量为多维Tensor,它的维度应该为 `[*, M, N]` ,其中*为零或更大的批次尺寸,并且最里面的两个维度上的矩阵都应为对称的正定矩阵,支持数据类型为float32,float64。
- **upper** (bool)- 指示是否返回上三角矩阵或下三角矩阵。默认值:False。
- **name** (str , 可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回: 与 `x` 具有相同形状和数据类型的Tensor。它代表了Cholesky分解生成的三角矩阵。
返回类型: 变量(Variable)
**代码示例**
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
a = np.random.rand(3, 3)
a_t = np.transpose(a, [1, 0])
x_data = np.matmul(a, a_t) + 1e-03
x = paddle.imperative.to_variable(x_data)
out = paddle.cholesky(x, upper=False)
print(out.numpy())
# [[1.190523 0. 0. ]
# [0.9906703 0.27676893 0. ]
# [1.25450498 0.05600871 0.06400121]]
.. _cn_api_tensor_concat:
concat
-------------------------------
**版本升级,文档正在开发中**
.. py:function:: paddle.tensor.concat(x, axis=0, name=None)
该OP对输入沿 ``axis`` 轴进行联结,返回一个新的Tensor。
参数:
- **x** (list|tuple) - 待联结的Tensor list或者Tensor tuple ,支持的数据类型为:bool, float16, float32、float64、int32、int64, ``x`` 中所有Tensor的数据类型应该一致。
- **axis** (int|Tensor,可选) - 指定对输入 ``x`` 进行运算的轴,可以是整数或者形状为[1]的Tensor,数据类型为int32或者int64。 ``axis`` 的有效范围是[-R, R),R是输入 ``x`` 中Tensor的维度, ``axis`` 为负值时与 :math:`axis + R` 等价。默认值为0。
- **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:联结后的Tensor ,数据类型和 ``x`` 中的Tensor相同。
抛出异常:
- ``TypeError``: - 当输入 ``x`` 的类型不是list或者tuple时。
- ``TypeError``: - 当输入 ``x`` 的数据类型不是 bool,float16, float32, float64, int32, int64时。
- ``TypeError``: - 当 ``axis`` 的类型不是int或者Tensor时。 当 ``axis`` 是Tensor的时候其数据类型不是int32或者int64时。
- ``TypeError``: - 当输入 ``x`` 中的Tensor存在数据类型不一致时。
**代码示例**:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative() # Now we are in imperative mode
in1 = np.array([[1, 2, 3],
[4, 5, 6]])
in2 = np.array([[11, 12, 13],
[14, 15, 16]])
in3 = np.array([[21, 22],
[23, 24]])
x1 = paddle.imperative.to_variable(in1)
x2 = paddle.imperative.to_variable(in2)
x3 = paddle.imperative.to_variable(in3)
zero = paddle.full(shape=[1], dtype='int32', fill_value=0)
# When the axis is negative, the real axis is (axis + Rank(x))
# As follow, axis is -1, Rank(x) is 2, the real axis is 1
out1 = paddle.concat(x=[x1, x2, x3], axis=-1)
out2 = paddle.concat(x=[x1, x2], axis=0)
out3 = paddle.concat(x=[x1, x2], axis=zero)
# out1
# [[ 1 2 3 11 12 13 21 22]
# [ 4 5 6 14 15 16 23 24]]
# out2 out3
# [[ 1 2 3]
# [ 4 5 6]
# [11 12 13]
# [14 15 16]]
......@@ -3,50 +3,54 @@
cross
-------------------------------
.. py:function:: paddle.cross(input, other, dim=None)
.. py:function:: paddle.cross(x, y, axis=None, name=None)
:alias_main: paddle.cross
:alias: paddle.cross,paddle.tensor.cross,paddle.tensor.linalg.cross
该OP返回在 ``dim`` 维度上,两个张量 ``input`` 和 ``other`` 的向量积(叉积)。 ``input`` 和 ``other`` 必须有相同的形状,
且指定的 ``dim`` 维上 ``size`` 必须为3,如果 ``dim`` 未指定,默认选取第一个 ``size`` 等于3的维度。
计算张量 ``x`` 和 ``y`` 在 ``axis`` 维度上的向量积(叉积)。 ``x`` 和 ``y`` 必须有相同的形状,
且指定的 ``axis`` 的长度必须为3. 如果未指定 ``axis`` ,默认选取第一个长度为3的 ``axis`` .
**参数**:
- **input** (Variable)– 第一个输入张量。
- **other** (Variable)– 第二个输入张量。
- **dim** (int, optional) – 沿着此维进行叉积操作,若未指定,则默认选取第一个 ``size`` 等于3的维度
- **x** (Variable)– 第一个输入张量。
- **y** (Variable)– 第二个输入张量。
- **axis** (int, optional) – 沿着此维进行向量积操作。默认选取第一个长度为3的 ``axis`` .
- **name** (str,可选)- 输出的名字。默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
**返回**:
- **Variable** ,数据类型同输入。
**返回**:向量积的结果。
**返回类型**:Variable
**代码示例**:
.. code-block:: python
import paddle
import paddle.fluid as fluid
from paddle.imperative import to_variable
import numpy as np
paddle.enable_imperative()
data_x = np.array([[1.0, 1.0, 1.0],
[2.0, 2.0, 2.0],
[3.0, 3.0, 3.0]])
data_y = np.array([[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0]])
x = to_variable(data_x)
y = to_variable(data_y)
with fluid.dygraph.guard():
x = fluid.dygraph.to_variable(data_x)
y = fluid.dygraph.to_variable(data_y)
out_z1 = paddle.cross(x, y)
print(out_z1.numpy())
#[[-1. -1. -1.]
z1 = paddle.cross(x, y)
print(z1.numpy())
# [[-1. -1. -1.]
# [ 2. 2. 2.]
# [-1. -1. -1.]]
out_z2 = paddle.cross(x, y, dim=1)
print(out_z2.numpy())
#[[0. 0. 0.]
z2 = paddle.cross(x, y, axis=1)
print(z2.numpy())
# [[0. 0. 0.]
# [0. 0. 0.]
# [0. 0. 0.]]
......
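针对上面 cross 的示例,补充一个假设性的 NumPy 对照示意(非官方示例),可用 ``numpy.cross`` 核对 ``axis=1`` 时的向量积结果:

.. code-block:: python

    # 假设性的对照示意:用 numpy.cross 复现上例中 axis=1 的结果
    import numpy as np

    data_x = np.array([[1.0, 1.0, 1.0],
                       [2.0, 2.0, 2.0],
                       [3.0, 3.0, 3.0]])
    data_y = np.ones((3, 3))
    print(np.cross(data_x, data_y, axis=1))
    # [[0. 0. 0.]
    #  [0. 0. 0.]
    #  [0. 0. 0.]]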
......@@ -2,6 +2,53 @@
cumsum
-------------------------------
:doc_source: paddle.fluid.layers.cumsum
.. py:function:: paddle.cumsum(x, axis=None, dtype=None, name=None)
沿给定 ``axis`` 计算张量 ``x`` 的累加和。结果的第一个元素和输入的第一个元素相同。
参数:
- **x** (Tensor) - 累加的输入,需要进行累加操作的Tensor.
- **axis** (int,可选) - 指明需要累加的维度。-1代表最后一维。默认:None,将输入展开为一维变量再进行累加计算。
- **dtype** (str,可选) - 输出Tensor的数据类型,支持int32、int64、float32、float64. 如果指定了,那么在执行操作之前,输入张量将被转换为dtype. 这对于防止数据类型溢出非常有用。默认为:None.
- **name** (str,可选)- 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name` 。
返回:累加的结果,即累加器的输出。
返回类型:Tensor
**代码示例**:
.. code-block:: python
import paddle
from paddle.imperative import to_variable
import numpy as np
paddle.enable_imperative()
data_np = np.arange(12).reshape(3, 4)
data = to_variable(data_np)
y = paddle.cumsum(data)
print(y.numpy())
# [ 0 1 3 6 10 15 21 28 36 45 55 66]
y = paddle.cumsum(data, axis=0)
print(y.numpy())
# [[ 0 1 2 3]
# [ 4 6 8 10]
# [12 15 18 21]]
y = paddle.cumsum(data, axis=-1)
print(y.numpy())
# [[ 0 1 3 6]
# [ 4 9 15 22]
# [ 8 17 27 38]]
y = paddle.cumsum(data, dtype='float64')
print(y.dtype)
# VarType.FP64
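补充一个假设性的 NumPy 对照示意(非官方示例):``paddle.cumsum`` 的累加行为与 ``numpy.cumsum`` 一致,可用其核对上面的输出:

.. code-block:: python

    # 假设性的对照示意:用 numpy.cumsum 核对上例输出
    import numpy as np

    data_np = np.arange(12).reshape(3, 4)
    print(np.cumsum(data_np))           # [ 0  1  3  6 10 15 21 28 36 45 55 66]
    print(np.cumsum(data_np, axis=0))   # 与 paddle.cumsum(data, axis=0) 的结果一致
    print(np.cumsum(data_np, axis=-1))  # 与 paddle.cumsum(data, axis=-1) 的结果一致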
......@@ -6,22 +6,23 @@ dot
.. py:function:: paddle.tensor.linalg.dot(x, y, name=None)
:alias_main: paddle.dot
:alias: paddle.dot,paddle.tensor.dot,paddle.tensor.linalg.dot
:alias: paddle.dot, paddle.tensor.dot, paddle.tensor.linalg.dot
该OP计算向量的内积
.. note::
仅支持1维Tensor(向量).
参数:
- **x** (Variable)- 1维 ``Tensor`` 或 ``LoDTensor`` 。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64``
- **y** (Variable)- 1维 ``Tensor`` 或 ``LoDTensor`` 。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64``
- **x** (Variable)- 1维 ``Tensor`` 。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64``
- **y** (Variable)- 1维 ``Tensor`` 。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64``
- **name** (str,可选)- 输出的名字。默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回: ``Tensor`` 或 ``LoDTensor`` ,数据类型与 ``x`` 相同。
返回: ``Tensor`` ,数据类型与 ``x`` 相同。
返回类型: Variable。
......@@ -30,13 +31,12 @@ dot
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
with fluid.dygraph.guard():
x = fluid.dygraph.to_variable(np.random.uniform(0.1, 1, [10]).astype(np.float32))
y = fluid.dygraph.to_variable(np.random.uniform(1, 3, [10]).astype(np.float32))
paddle.enable_imperative()
x_data = np.random.uniform(0.1, 1, [10]).astype(np.float32)
y_data = np.random.uniform(1, 3, [10]).astype(np.float32)
x = paddle.imperative.to_variable(x_data)
y = paddle.imperative.to_variable(y_data)
z = paddle.dot(x, y)
print(z.numpy())
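补充一个假设性的 NumPy 对照示意(非官方示例):1 维向量的内积可以直接用 ``numpy.dot`` 核对:

.. code-block:: python

    # 假设性的对照示意:对同一组数据,numpy.dot 与 paddle.dot 的结果一致
    import numpy as np

    x_data = np.random.uniform(0.1, 1, [10]).astype(np.float32)
    y_data = np.random.uniform(1, 3, [10]).astype(np.float32)
    print(np.dot(x_data, y_data))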
.. _cn_api_tensor_elementwise_equal:
elementwise_equal
-------------------------------
.. py:function:: paddle.elementwise_equal(x, y, name=None)
:alias_main: paddle.elementwise_equal
:alias: paddle.elementwise_equal,paddle.tensor.elementwise_equal,paddle.tensor.logic.elementwise_equal
:update_api: paddle.fluid.layers.equal
该OP返回 :math:`x==y` 逐元素比较x和y是否相等。
参数:
- **x** (Variable) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Variable) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
返回类型:变量(Variable)
**代码示例**:
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
label = fluid.layers.assign(np.array([3, 3], dtype="int32"))
limit = fluid.layers.assign(np.array([3, 2], dtype="int32"))
out1 = paddle.elementwise_equal(x=label, y=limit) #out1=[True, False]
.. _cn_api_tensor_cn_elementwise_max:
elementwise_max
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_max
.. _cn_api_tensor_cn_elementwise_min:
elementwise_min
-------------------------------
:doc_source: paddle.fluid.layers.elementwise_min
.. _cn_api_tensor_equal_all:
equal_all
-------------------------------
.. py:function:: paddle.equal_all(x, y, name=None)
:alias_main: paddle.equal_all
:alias: paddle.equal_all,paddle.tensor.equal_all,paddle.tensor.logic.equal_all
该OP返回的结果只有一个元素值:如果两个输入Tensor在所有相同位置上的元素都相等则返回True,否则返回False。
**注:该OP输出的结果不返回梯度。**
参数:
- **x** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果为Tensor,Tensor数据类型为bool。
返回类型:变量(Tensor)
**代码示例**:
.. code-block:: python
import numpy as np
import paddle
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([1, 2, 3]))
y = imperative.to_variable(np.array([1, 2, 3]))
z = imperative.to_variable(np.array([1, 4, 3]))
result1 = paddle.equal_all(x, y)
print(result1.numpy()) # result1 = [True ]
result2 = paddle.equal_all(x, z)
print(result2.numpy()) # result2 = [False ]
......@@ -2,54 +2,35 @@
equal
-------------------------------
.. py:function:: paddle.equal(x, y, axis=-1, name=None)
.. py:function:: paddle.equal(x, y, name=None)
:alias_main: paddle.equal
:alias: paddle.equal,paddle.tensor.equal,paddle.tensor.logic.equal
该OP返回 :math:`x==y` 逐元素比较x和y是否相等,相同位置的元素相同则返回True,否则返回False。使用重载算子 `==` 可以有相同的计算函数效果
该OP返回 :math:`x==y` 逐元素比较x和y是否相等,所有的元素都相同则返回True,否则返回False。
**注:该OP输出的结果不返回梯度。**
参数:
- **x** (Variable) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Variable) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **axis** (int, 可选) - 如果输入的两个Tensor的维度不相同,并且如果y的维度是x的一部分, 那就可以通过broadcast的方式来进行op计算。axis是进行broadcast的开始的维度,具体broadcast的方式可以参考elementwise_add。
- **x** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor只有一个元素值,元素值是True或者False,Tensor数据类型为bool。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
返回类型:变量(Variable
返回类型:变量(Tensor
**代码示例**:
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
label = fluid.layers.assign(np.array([3, 4], dtype="int32"))
label_1 = fluid.layers.assign(np.array([1, 2], dtype="int32"))
limit = fluid.layers.assign(np.array([3, 4], dtype="int32"))
out1 = paddle.equal(x=label, y=limit) #out1=[True]
out2 = paddle.equal(x=label_1, y=limit) #out2=[False]
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
def gen_data():
return {
"x": np.ones((2, 3, 4, 5)).astype('float32'),
"y": np.zeros((3, 4)).astype('float32')
}
x = fluid.data(name="x", shape=[2,3,4,5], dtype='float32')
y = fluid.data(name="y", shape=[3,4], dtype='float32')
out = paddle.equal(x, y, axis=1)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
res = exe.run(feed=gen_data(),
fetch_list=[out])
print(res[0]) #[False]
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([1, 2, 3]))
y = imperative.to_variable(np.array([1, 3, 2]))
result1 = paddle.equal(x, y)
print(result1.numpy()) # result1 = [True False False]
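文中提到重载算子 `==` 与 ``paddle.equal`` 有相同的计算效果,下面给出一个示意性的最小示例(假设在命令式模式下执行,输出格式仅供参考):

.. code-block:: python

    import numpy as np
    import paddle
    import paddle.imperative as imperative

    paddle.enable_imperative()
    x = imperative.to_variable(np.array([1, 2, 3]))
    y = imperative.to_variable(np.array([1, 3, 2]))
    # 重载算子 == 与 paddle.equal 的逐元素比较结果一致
    result = (x == y)
    print(result.numpy()) # [True False False]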
......@@ -3,35 +3,36 @@
eye
-------------------------------
.. py:function:: paddle.tensor.eye(num_rows, num_columns=None, out=None, dtype='float32', stop_gradient=True, name=None)
.. py:function:: paddle.tensor.eye(num_rows, num_columns=None, dtype=None, name=None)
该OP用来构建单位矩阵
该OP用来构建二维Tensor(主对角线元素为1,其他元素为0)
参数:
- **num_rows** (int) - 生成单位矩阵的行数,数据类型为非负int32。
- **num_columns** (int) - 生成单位矩阵的列数,数据类型为非负int32。若为None,则默认等于num_rows。
- **out** (Variable, 可选) - 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **dtype** (string, 可选) - 返回张量的数据类型,可为int32,int64,float16,float32,float64。
- **stop_gradient** (bool, 可选) - 是否对此OP停止计算梯度,默认值为False。
- **num_rows** (int) - 生成二维Tensor的行数,数据类型为非负int32。
- **num_columns** (int,可选) - 生成二维Tensor的列数,数据类型为非负int32。若为None,则默认等于num_rows。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选) - 返回Tensor的数据类型,可为float16,float32,float64, int32, int64。若为None, 则默认等于float32。
- **name** (str, 可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:shape为 [num_rows, num_columns]的张量
返回: ``shape`` 为 [num_rows, num_columns]的Tensor
返回类型:Variable(Tensor|LoDTensor)数据类型为int32,int64,float16,float32,float64的Tensor或者LoDTensor。
抛出异常:
- ``TypeError``: - 如果 ``dtype`` 的类型不是float16, float32, float64, int32, int64其中之一。
- ``TypeError``: - 如果 ``num_columns`` 不是非负整数或者 ``num_rows`` 不是非负整数。
**代码示例**:
.. code-block:: python
import paddle
data = paddle.eye(3, dtype='int32') # paddle.eye 等价于 paddle.tensor.eye
# [[1, 0, 0]
# [0, 1, 0]
# [0, 0, 1]]
paddle.enable_imperative() # Now we are in imperative mode
data = paddle.eye(3, dtype='int32')
# [[1 0 0]
# [0 1 0]
# [0 0 1]]
data = paddle.eye(2, 3, dtype='int32')
# [[1, 0, 0]
# [0, 1, 0]]
# [[1 0 0]
# [0 1 0]]
......
......@@ -3,53 +3,41 @@
flip
-------------------------------
.. py:function:: paddle.flip(input, dims, name=None):
.. py:function:: paddle.flip(x, axis, name=None):
:alias_main: paddle.flip
:alias: paddle.flip,paddle.tensor.flip,paddle.tensor.manipulation.flip
:alias: paddle.flip, paddle.tensor.flip, paddle.tensor.manipulation.flip
该OP沿指定轴反转n维tensor.
参数:
- **input** (Variable) - 输入Tensor。维度为多维,数据类型为bool, int32, int64, float32或float64。
- **dims** (list) - 需要翻转的轴。当 ``dims[i] < 0`` 时,实际的计算维度为 rank(input) + dims[i],其中i为dims的索引。
- **x** (Variable) - 输入张量。维度为多维,数据类型为bool, int32, int64, float32或float64。
- **axis** (list) - 需要翻转的轴。当 ``axis[i] < 0`` 时,实际的计算维度为 ndim(x) + axis[i],其中i为axis的索引。
- **name** (str|None) - 该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。默认值为None。
返回:在指定dims上翻转后的Tensor,与输入input数据类型相同。
返回:在指定axis上翻转后的张量,与输入x数据类型相同。
返回类型:Variable,与输入input数据类型相同。
返回类型:Variable,与输入x数据类型相同。
抛出异常:
- ``TypeError`` - 当输出 ``out`` 和输入 ``input`` 数据类型不一致时候。
- ``ValueError`` - 当参数 ``dims`` 不合法时。
- ``TypeError`` - 当输出 ``out`` 和输入 ``x`` 数据类型不一致时候。
- ``ValueError`` - 当参数 ``axis`` 不合法时。
**代码示例1**:
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
input = fluid.data(name="x", shape=[-1, 2, 2], dtype='float32')
output = paddle.flip(input, dims=[0, 1])
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
img = np.arange(12).reshape((3,2,2)).astype(np.float32)
res = exe.run(fluid.default_main_program(), feed={'x':img}, fetch_list=[output])
print(res) # [[[10,11][8, 9]],[[6, 7],[4, 5]] [[2, 3],[0, 1]]]
**代码示例2**:
paddle.enable_imperative()
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
img = np.arange(12).reshape((3,2,2)).astype(np.float32)
with fluid.dygraph.guard():
inputs = fluid.dygraph.to_variable(img)
ret = paddle.flip(inputs, [0, 1])
print(ret.numpy()) # [[[10,11][8, 9]],[[6, 7],[4, 5]] [[2, 3],[0, 1]]]
image_shape=(3, 2, 2)
x = np.arange(image_shape[0] * image_shape[1] * image_shape[2]).reshape(image_shape)
x = x.astype('float32')
img = paddle.imperative.to_variable(x)
out = paddle.flip(img, [0,1])
print(out) # [[[10,11][8, 9]],[[6, 7],[4, 5]] [[2, 3],[0, 1]]]
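参数说明中提到,当 ``axis[i] < 0`` 时实际按 ``ndim(x) + axis[i]`` 处理。下面给出一个使用负数轴的简单示例(沿用上例数据,假设在命令式模式下执行,输出格式仅为示意):

.. code-block:: python

    import numpy as np
    import paddle

    paddle.enable_imperative()
    x = np.arange(12).reshape((3, 2, 2)).astype('float32')
    img = paddle.imperative.to_variable(x)
    # axis=[-1] 等价于 axis=[2],即沿最后一个轴翻转
    out = paddle.flip(img, [-1])
    print(out.numpy())
    # [[[ 1.  0.]
    #   [ 3.  2.]]
    #  [[ 5.  4.]
    #   [ 7.  6.]]
    #  [[ 9.  8.]
    #   [11. 10.]]]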
......@@ -3,33 +3,24 @@
full
-------------------------------
.. py:function:: paddle.full(shape, fill_value, out=None, dtype=None, device=None, stop_gradient=True, name=None)
.. py:function:: paddle.full(shape, fill_value, dtype=None, name=None)
:alias_main: paddle.full
:alias: paddle.full,paddle.tensor.full,paddle.tensor.creation.full
:update_api: paddle.fluid.layers.fill_constant
该OP创建一个和具有相同的形状和数据类型的Tensor,其中元素值均为fill_value。
该OP创建形状大小为shape并且数据类型为dtype的Tensor,其中元素值均为 ``fill_value``。
参数:
- **shape** (list|tuple|Variable) – 指定创建Tensor的形状(shape)。
- **fill_value** (bool|float16|float32|int32|int64|Variable) - 用于初始化输出Tensor的常量数据的值。默认为0。注意:该参数不可超过输出变量数据类型的表示范围。
- **out** (Variable,可选) - 输出Tensor。如果为None,则创建一个新的Tensor作为输出Tensor,默认值为None。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若参数为空,则输出变量的数据类型和输入变量相同,默认值为None。
- **device** (str,可选) – 选择在哪个设备运行该操作,可选值包括None,'cpu'和'gpu'。如果 ``device`` 为None,则将选择运行Paddle程序的设备,默认为None。
- **stop_gradient** (bool,可选) – 是否从此 Variable 开始,之前的相关部分都停止梯度计算,默认为True。
- **shape** (list|tuple|Tensor) – 指定创建Tensor的形状(shape), 数据类型为int32 或者int64。
- **fill_value** (bool|float|int|Tensor) - 用于初始化输出Tensor的常量数据的值。注意:该参数不可超过输出变量数据类型的表示范围。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若为None,则输出变量的数据类型和输入变量相同,默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:返回一个存储结果的Tensor。
返回:返回一个存储结果的Tensor,数据类型和dtype相同
返回类型:Variable
抛出异常:
- ``TypeError`` - 如果 ``dtype`` 的类型不是bool, float16, float32, float64, int32, int64其中之一。
- ``TypeError`` - 如果 ``out`` 的元素的类型不是Variable。
- ``TypeError`` - 如果 ``shape`` 的类型不是list或tuple或Varibable。
- ``TypeError``: - 如果 ``dtype`` 的类型不是bool, float16, float32, float64, int32, int64其中之一。
- ``TypeError``: - 如果 ``shape`` 的类型不是list或tuple或Tensor。当 ``shape`` 是Tensor的时候,其数据类型不是int32或者int64时。
**代码示例**:
......@@ -37,17 +28,24 @@ full
import paddle
data1 = paddle.full(shape=[2,1], fill_value=0, dtype='int64') # data1=[[0],[0]]
data2 = paddle.full(shape=[2,1], fill_value=5, dtype='int64', device='gpu') # data2=[[5],[5]]
# attr shape is a list which contains Variable Tensor.
positive_2 = paddle.fill_constant([1], "int32", 2)
data3 = paddle.full(shape=[1, positive_2], dtype='float32', fill_value=1.5) # data3=[1.5, 1.5]
# attr shape is an Variable Tensor.
shape = paddle.fill_constant([1,2], "int32", 2) # shape=[2,2]
data4 = paddle.full(shape=shape, dtype='bool', fill_value=True) # data4=[[True,True],[True,True]]
# attr value is an Variable Tensor.
val = paddle.fill_constant([1], "float32", 2.0) # val=[2.0]
data5 = paddle.full(shape=[2,1], fill_value=val, dtype='float32') #data5=[[2.0],[2.0]]
paddle.enable_imperative() # Now we are in imperative mode
data1 = paddle.full(shape=[2,1], fill_value=0, dtype='int64')
#[[0]
# [0]]
# attr shape is a list which contains Tensor.
positive_2 = paddle.fill_constant([1], "int32", 2)
data3 = paddle.full(shape=[1, positive_2], dtype='float32', fill_value=1.5)
# [[1.5 1.5]]
# attr shape is a Tensor.
shape = paddle.fill_constant([2], "int32", 2)
data4 = paddle.full(shape=shape, dtype='bool', fill_value=True)
# [[True True]
# [True True]]
# attr fill_value is a Tensor.
val = paddle.fill_constant([1], "float32", 2.0)
data5 = paddle.full(shape=[2,1], fill_value=val, dtype='float32')
# [[2.0]
# [2.0]]
......@@ -3,40 +3,33 @@
full_like
-------------------------------
.. py:function:: paddle.full_like(input, fill_value, out=None, dtype=None, device=None, stop_gradient=True, name=None)
.. py:function:: paddle.full_like(x, fill_value, dtype=None, name=None)
:alias_main: paddle.full_like
:alias: paddle.full_like,paddle.tensor.full_like,paddle.tensor.creation.full_like
该OP创建一个和input具有相同的形状和数据类型的Tensor,其中元素值均为fill_value。
该OP创建一个和 ``x`` 具有相同的形状并且数据类型为 ``dtype`` 的Tensor,其中元素值均为 ``fill_value`` , 当 ``dtype`` 为None的时候,Tensor数据类型和输入 ``x`` 相同。
参数:
- **input** (Variable) – 指定输入为一个多维的Tensor,数据类型可以是bool,float16,float32,float64,int32,int64。
- **fill_value** (bool|float|int) - 用于初始化输出Tensor的常量数据的值。默认为0。注意:该参数不可超过输出变量数据类型的表示范围。
- **out** (Variable,可选) - 输出Tensor。如果为None,则创建一个新的Tensor作为输出Tensor,默认值为None。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若参数为空,则输出变量的数据类型和输入变量相同,默认值为None。
- **device** (str,可选) – 选择在哪个设备运行该操作,可选值包括None,'cpu'和'gpu'。如果 ``device`` 为None,则将选择运行Paddle程序的设备,默认为None。
- **stop_gradient** (bool,可选) – 是否从此 Variable 开始,之前的相关部分都停止梯度计算,默认为True。
- **x** (Tensor) – 输入Tensor, 输出Tensor和x具有相同的形状,x的数据类型可以是bool,float16,float32,float64,int32,int64。
- **fill_value** (bool|float|int) - 用于初始化输出张量的常量数据的值。注意:该参数不可超过输出变量数据类型的表示范围。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若参数为None,则输出变量的数据类型和输入变量相同,默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:返回一个存储结果的Tensor。
返回:返回一个存储结果的Tensor,数据类型和dtype相同
返回类型:Variable
抛出异常:
- ``TypeError``: - 当 ``x`` 的数据类型不是bool、float16、float32、float64、int32、int64其中之一。
- ``TypeError``: - 当 ``dtype`` 不是bool、float16、float32、float64、int32、int64或者None其中之一。
**代码示例**:
**代码示例**:
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
input = fluid.data(name='input', dtype='float32', shape=[2, 3])
paddle.enable_imperative() # Now we are in imperative mode
input = paddle.full(shape=[2, 3], fill_value=0.0, dtype='float32', name='input')
output = paddle.full_like(input, 2.0)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
img=np.array([[1, 2, 3], [4, 5, 6]]).astype(np.float32)
res = exe.run(fluid.default_main_program(), feed={'input':img}, fetch_list=[output])
print(res) # [array([[2., 2., 2.], [2., 2., 2.]], dtype=float32)]
# [[2. 2. 2.]
# [2. 2. 2.]]
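若需要输出与输入不同的数据类型,可以传入 ``dtype`` 参数。下面是一个简单示意(沿用上例中的 ``input``,假设在命令式模式下执行):

.. code-block:: python

    import paddle

    paddle.enable_imperative()
    input = paddle.full(shape=[2, 3], fill_value=0.0, dtype='float32')
    # 指定 dtype='int32',输出形状与 input 相同,元素值均为 1
    output = paddle.full_like(input, 1, dtype='int32')
    # [[1 1 1]
    #  [1 1 1]]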
......@@ -2,6 +2,36 @@
greater_equal
-------------------------------
:doc_source: paddle.fluid.layers.greater_equal
.. py:function:: paddle.greater_equal(x, y, name=None)
:alias_main: paddle.greater_equal
:alias: paddle.greater_equal,paddle.tensor.greater_equal,paddle.tensor.logic.greater_equal
该OP逐元素地返回 :math:`x >= y` 的逻辑值,相同位置前者输入大于等于后者输入则返回True,否则返回False。使用重载算子 `>=` 可以有相同的计算函数效果。
**注:该OP输出的结果不返回梯度。**
参数:
- **x** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
返回类型:变量(Tensor)
**代码示例**:
.. code-block:: python
import numpy as np
import paddle
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([1, 2, 3]))
y = imperative.to_variable(np.array([1, 3, 2]))
result1 = paddle.greater_equal(x, y)
print(result1.numpy()) # result1 = [True False True]
......@@ -2,6 +2,34 @@
greater_than
-------------------------------
:doc_source: paddle.fluid.layers.greater_than
.. py:function:: paddle.greater_than(x, y, name=None)
:alias_main: paddle.greater_than
:alias: paddle.greater_than,paddle.tensor.greater_than,paddle.tensor.logic.greater_than
该OP逐元素地返回 :math:`x > y` 的逻辑值,相同位置前者输入大于后者输入则返回True,否则返回False。使用重载算子 `>` 可以有相同的计算函数效果。
**注:该OP输出的结果不返回梯度。**
参数:
- **x** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
返回类型:变量(Tensor)
**代码示例**:
.. code-block:: python
import numpy as np
import paddle
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([1, 2, 3]))
y = imperative.to_variable(np.array([1, 3, 2]))
result1 = paddle.greater_than(x, y)
print(result1.numpy()) # result1 = [False False True]
.. _cn_api_tensor_histogram:
histogram
-------------------------------
.. py:function:: paddle.histogram(input, bins=100, min=0, max=0):
计算输入张量的直方图。以min和max为range边界,将其均分成bins个直条,然后将排序好的数据划分到各个直条(bins)中。如果min和max都为0, 则利用数据中的最大最小值作为边界。
参数:
- **input** (Variable) - 输入Tensor。维度为多维,数据类型为int32, int64, float32或float64。
- **bins** (int) - 直方图 bins(直条)的个数,默认为100。
- **min** (int) - range的下边界(包含),默认为0。
- **max** (int) - range的上边界(包含),默认为0。
返回:直方图。
返回类型:Variable,数据为int64类型,维度为(nbins,)。
抛出异常:
- ``ValueError`` - 当输入 ``bins``、``min``、``max`` 不合法时。
**代码示例1**:
.. code-block:: python
import paddle
import numpy as np
startup_program = paddle.Program()
train_program = paddle.Program()
with paddle.program_guard(train_program, startup_program):
inputs = paddle.data(name='input', dtype='int32', shape=[2,3])
output = paddle.histogram(inputs, bins=5, min=1, max=5)
place = paddle.CPUPlace()
exe = paddle.Executor(place)
exe.run(startup_program)
img = np.array([[2, 4, 2], [2, 5, 4]]).astype(np.int32)
res = exe.run(train_program,
feed={'input': img},
fetch_list=[output])
print(np.array(res[0])) # [0, 3, 0, 2, 1]
**代码示例2**:
.. code-block:: python
import paddle
import numpy as np
with paddle.imperative.guard(paddle.CPUPlace()):
inputs_np = np.array([0.5, 1.5, 2.5]).astype(np.float)
inputs = paddle.imperative.to_variable(inputs_np)
result = paddle.histogram(inputs, bins=5, min=1, max=5)
print(result) # [1, 1, 0, 0, 0]
......@@ -3,48 +3,47 @@
index_select
-------------------------------
.. py:function:: paddle.index_select(input, index, dim=0)
.. py:function:: paddle.index_select(x, index, axis=0, name=None)
:alias_main: paddle.index_select
:alias: paddle.index_select,paddle.tensor.index_select,paddle.tensor.search.index_select
该OP沿着指定维度 ``dim`` 对输入 ``input`` 进行索引,取 ``index`` 中指定的相应项,然后返回到一个新的张量。这里 ``index`` 是一个 ``1-D`` 张量。除 ``dim`` 维外,返回的张量其余维度大小同输入 ``input`` , ``dim`` 维大小等于 ``index`` 的大小。
该OP沿着指定轴 ``axis`` 对输入 ``x`` 进行索引,取 ``index`` 中指定的相应项,创建并返回到一个新的Tensor。这里 ``index`` 是一个 ``1-D`` Tensor。除 ``axis`` 轴外,返回的Tensor其余维度大小和输入 ``x`` 相等 , ``axis`` 维度的大小等于 ``index`` 的大小。
**参数**:
- **input** (Variable)– 输入张量。
- **index** (Variable)– 包含索引下标的一维张量。
- **dim** (int, optional) – 索引轴,若未指定,则默认选取第一维。
- **x** (Tensor)– 输入Tensor。 ``x`` 的数据类型可以是float32,float64,int32,int64。
- **index** (Tensor)– 包含索引下标的一维Tensor。
- **axis** (int, 可选) – 索引轴,若未指定,则默认选取第0维。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
**返回**:
-**Variable** ,数据类型同输入。
-**Tensor**: 返回一个数据类型同输入的Tensor。
抛出异常:
- ``TypeError`` - 当 ``x`` 或者 ``index`` 的类型不是Tensor。
- ``TypeError`` - 当 ``x`` 的数据类型不是float32、float64、int32、int64其中之一或者 ``index`` 的数据类型不是int32、int64其中之一。
**代码示例**:
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
paddle.disable_static() # Now we are in imperative mode
data = np.array([[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, 10.0, 11.0, 12.0]])
data_index = np.array([0, 1, 1]).astype('int32')
with fluid.dygraph.guard():
x = fluid.dygraph.to_variable(data)
index = fluid.dygraph.to_variable(data_index)
out_z1 = paddle.index_select(x, index)
print(out_z1.numpy())
x = paddle.to_variable(data)
index = paddle.to_variable(data_index)
out_z1 = paddle.index_select(x=x, index=index)
#[[1. 2. 3. 4.]
# [5. 6. 7. 8.]
# [5. 6. 7. 8.]]
out_z2 = paddle.index_select(x, index, dim=1)
print(out_z2.numpy())
out_z2 = paddle.index_select(x=x, index=index, axis=1)
#[[ 1. 2. 2.]
# [ 5. 6. 6.]
# [ 9. 10. 10.]]
......@@ -3,30 +3,28 @@
inverse
-------------------------------
.. py:function:: paddle.inverse(input, out=None, name=None)
.. py:function:: paddle.inverse(x, name=None)
:alias_main: paddle.inverse
:alias: paddle.inverse,paddle.tensor.inverse,paddle.tensor.math.inverse
:alias: paddle.inverse, paddle.tensor.inverse, paddle.tensor.math.inverse
计算方阵的逆。方阵是行数和列数相等的矩阵。输入可以是一个方阵(2-D张量),或者是批次方阵(维数大于2时)。
**参数**:
- **input** (Variable) – 输入张量,最后两维的大小必须相等。如果输入张量的维数大于2,则高维部分代表2-D矩阵的批次(batch)。支持的数据类型:float32,float64。
- **out** (Variable,可选) – 指定求和的结果Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
**参数**:
- **x** (Variable) – 输入张量,最后两维的大小必须相等。如果输入张量的维数大于2,则被视为2-D矩阵的批次(batch)。支持的数据类型:float32,float64。
- **name** (str,可选) – 该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` ,默认值为None。
**返回**数据类型同输入。
**返回**: 数据类型同输入。
返回类型Variable
返回类型: Variable
抛出异常:
- :code:`TypeError` ,input不是Variable类型,或者数据类型不是float32、float64时
- :code:`ValueError` ,input的维数小于2时
- :code:`TypeError` ,out不是Variable类型,或者数据类型和input不相同时
- :code:`TypeError` ,x不是Variable类型,或者数据类型不是float32、float64时
- :code:`ValueError` ,x的维数小于2时
**代码示例**
**代码示例**:
.. code-block:: python
......@@ -34,7 +32,7 @@ inverse
import paddle
mat_np = np.array([[2, 0], [0, 2]]).astype("float32")
with paddle.imperative.guard():
paddle.enable_imperative()
mat = paddle.imperative.to_variable(mat_np)
inv = paddle.inverse(mat)
print(inv.numpy()) # [[0.5, 0], [0, 0.5]]
print(inv) # [[0.5, 0], [0, 0.5]]
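当输入的维数大于2时,前面的维度会被视为batch维。下面是一个对批次方阵求逆的简单示意(假设在命令式模式下执行,输出格式仅供参考):

.. code-block:: python

    import numpy as np
    import paddle

    paddle.enable_imperative()
    # 形状为 [2, 2, 2],即 batch 大小为 2 的两个 2x2 方阵
    mats_np = np.array([[[2., 0.], [0., 2.]],
                        [[4., 0.], [0., 4.]]]).astype("float32")
    mats = paddle.imperative.to_variable(mats_np)
    invs = paddle.inverse(mats)
    print(invs.numpy())
    # [[[0.5  0.  ]
    #   [0.   0.5 ]]
    #  [[0.25 0.  ]
    #   [0.   0.25]]]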
......@@ -2,6 +2,36 @@
less_equal
-------------------------------
:doc_source: paddle.fluid.layers.less_equal
.. py:function:: paddle.less_equal(x, y, name=None)
:alias_main: paddle.less_equal
:alias: paddle.less_equal,paddle.tensor.less_equal,paddle.tensor.logic.less_equal
该OP逐元素地返回 :math:`x <= y` 的逻辑值,相同位置前者输入小于等于后者输入则返回True,否则返回False。使用重载算子 `<=` 可以有相同的计算函数效果。
**注:该OP输出的结果不返回梯度。**
参数:
- **x** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
返回类型:变量(Tensor)
**代码示例**:
.. code-block:: python
import numpy as np
import paddle
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([1, 2, 3]))
y = imperative.to_variable(np.array([1, 3, 2]))
result1 = paddle.less_equal(x, y)
print(result1.numpy()) # result1 = [True True False]
......@@ -2,6 +2,36 @@
less_than
-------------------------------
:doc_source: paddle.fluid.layers.less_than
.. py:function:: paddle.less_than(x, y, name=None)
:alias_main: paddle.less_than
:alias: paddle.less_than,paddle.tensor.less_than,paddle.tensor.logic.less_than
该OP逐元素地返回 :math:`x < y` 的逻辑值,相同位置前者输入小于后者输入则返回True,否则返回False。使用重载算子 `<` 可以有相同的计算函数效果。
**注:该OP输出的结果不返回梯度。**
参数:
- **x** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
返回类型:变量(Tensor)
**代码示例**:
.. code-block:: python
import numpy as np
import paddle
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([1, 2, 3]))
y = imperative.to_variable(np.array([1, 3, 2]))
result1 = paddle.less_than(x, y)
print(result1.numpy()) # result1 = [False True False]
......@@ -3,31 +3,33 @@
linspace
-------------------------------
.. py:function:: paddle.linspace(start, stop, num, dtype, out=None, device=None, name=None)
.. py:function:: paddle.linspace(start, stop, num, dtype=None, name=None)
:alias_main: paddle.linspace
:alias: paddle.linspace,paddle.tensor.linspace,paddle.tensor.creation.linspace
:update_api: paddle.fluid.layers.linspace
:alias: paddle.tensor.linspace, paddle.tensor.creation.linspace
该OP在给定区间内返回固定数目的均匀间隔的值
该OP返回一个Tensor,Tensor的值为在区间start和stop上均匀间隔的num个值,输出Tensor的长度为num
**注意:该OP不进行梯度计算**
参数:
- **start** (float|Variable) – start是区间开始的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **stop** (float|Variable) – end是区间结束的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **num** (int|Variable) – num是给定区间内需要划分的区间数,可以是一个整型标量,或是一个shape为[1]的Tensor,该Tensor的数据类型需为int32。
- **dtype** (string) – 输出Tensor的数据类型,可以是‘float32’或者是‘float64’。
- **out** (Variable,可选) – 指定存储运算结果的Tensor。如果设置为None或者不设置,将创建新的Tensor存储运算结果,默认值为None。
- **device** (str,可选) – 选择在哪个设备运行该操作,可选值包括None,'cpu'和'gpu'。如果 ``device`` 为None,则将选择运行Paddle程序的设备,默认为None。
- **start** (float|Tensor) – ``start`` 是区间开始的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **stop** (float|Tensor) – ``end`` 是区间结束的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **num** (int|Tensor) – ``num`` 是给定区间内需要划分的区间数,可以是一个整型标量,或是一个shape为[1]的Tensor,该Tensor的数据类型需为int32。
- **dtype** (np.dtype|core.VarDesc.VarType|str,可选) – 输出Tensor的数据类型,可以是float32或者是float64。如果dtype为None,默认类型为float32。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的数据类型是float32或float64,表示等间隔划分结果的1-D Tensor,该Tensor的shape大小为 :math:`[num]` ,在num为1的情况下,仅返回包含start元素值的Tensor。
返回类型:Variable
抛出异常:
- ``TypeError`` - 当start或者stop的数据类型不是float32或者float64。
- ``TypeError`` - 当 ``num`` 的数据类型不是int32时。
- ``TypeError`` - 当dtype的类型不是float32或者float64。
**代码示例**:
.. code-block:: python
......
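下面给出一个按上述参数说明编写的最小用法示意(假设在命令式模式下执行,输出数值仅供参考):

.. code-block:: python

    import paddle

    paddle.enable_imperative()
    data1 = paddle.linspace(0, 10, 5, 'float32')
    # [0.  2.5 5.  7.5 10. ]
    # num 为 1 时,仅返回包含 start 元素值的 Tensor
    data2 = paddle.linspace(0, 10, 1, 'float32')
    # [0.]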
......@@ -3,7 +3,7 @@
log1p
-------------------------------
.. py:function:: paddle.tensor.log1p(x, out=None, name=None)
.. py:function:: paddle.log1p(x, name=None)
:alias_main: paddle.log1p
:alias: paddle.log1p,paddle.tensor.log1p,paddle.tensor.math.log1p
......@@ -18,13 +18,12 @@ log1p
参数:
- **x** (Variable) – 该OP的输入为LodTensor/Tensor。数据类型为float32,float64。
- **out** (Variable, 可选) - 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **x** (Tensor) – 指定输入为一个多维的Tensor。数据类型为float32,float64。
- **name** (str,可选) – 该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` ,默认值为None。
返回:Log1p算子自然对数输出
返回类型: Variable - 该OP的输出为LodTensor/Tensor,数据类型为输入一致。
返回类型: Tensor - 该OP的输出为一个多维的Tensor,数据类型与输入一致。
**代码示例**
......@@ -32,18 +31,14 @@ log1p
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
x = fluid.data(name="x", shape=[2,1], dtype="float32")
res = paddle.log1p(x) # paddle.log1p等价于 paddle.tensor.log1p
# 举例选择CPU计算环境
exe = fluid.Executor(fluid.CPUPlace())
# 执行静态图,输出结果
x_i = np.array([[0], [1]]).astype(np.float32)
res_val, = exe.run(fluid.default_main_program(), feed={'x':x_i}, fetch_list=[res])
print(res_val) # [[0.], [0.6931472]]
paddle.enable_imperative()
x = np.array([[1, 2], [3, 4]]).astype('float32')
x1 = paddle.imperative.to_variable(x)
out1 = paddle.log1p(x1)
print(out1.numpy())
# [[0.6931472 1.0986123]
# [1.3862944 1.609438 ]]
......@@ -2,6 +2,43 @@
log
-------------------------------
:doc_source: paddle.fluid.layers.log
.. py:function:: paddle.log(x, name=None)
:alias_main: paddle.log
:alias: paddle.log,paddle.tensor.log,paddle.tensor.math.log
:old_api: paddle.fluid.layers.log
Log激活函数(计算自然对数)
.. math::
\\Out=ln(x)\\
参数:
- **x** (Tensor) – 指定输入为一个多维的Tensor。数据类型为float32,float64。
- **name** (str,可选) – 该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` ,默认值为None。
返回:Log算子自然对数输出
返回类型: Tensor - 该OP的输出为一个多维的Tensor,数据类型与输入一致。
**代码示例**
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
x = np.array([[1, 2], [3, 4]]).astype('float32')
x1 = paddle.imperative.to_variable(x)
out1 = paddle.log(x1)
print(out1.numpy())
# [[0. 0.6931472]
# [1.0986123 1.3862944]]
.. _cn_api_paddle_tensor_max:
max
-------------------------------
.. py:function:: paddle.tensor.max(input, dim=None, keep_dim=False, out=None, name=None)
.. py:function:: paddle.tensor.max(x, axis=None, keepdim=False, name=None)
:alias_main: paddle.max
:alias: paddle.max,paddle.tensor.max,paddle.tensor.math.max
:update_api: paddle.fluid.layers.reduce_max
该OP是对指定维度上的Tensor元素求最大值运算,并输出相应的计算结果。等价于 :ref:`cn_api_fluid_layers_reduce_max`
该OP是对指定维度上的Tensor元素求最大值运算,并输出相应的计算结果。
参数
- **input** (Variable)- 输入变量为多维Tensor或LoDTensor,支持数据类型为float32,float64,int32,int64。
- **dim** (list | int ,可选)- 求最大值运算的维度。如果为None,则计算所有元素的最大值并返回包含单个元素的Tensor变量,否则必须在 :math:`[−rank(input),rank(input)]` 范围内。如果 :math:`dim [i] <0` ,则维度将变为 :math:`rank+dim[i]` ,默认值为None
- **keep_dim** (bool)- 是否在输出Tensor中保留减小的维度。如 keep_dim 为true,否则结果张量的维度将比输入张量小,默认值为False。
- **out** (Variable, 可选) - 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果
参数
:::::::::
- **x** (Tensor)- Tensor,支持数据类型为float32,float64,int32,int64
- **axis** (list | int ,可选)- 求最大值运算的维度。如果为None,则计算所有元素的最大值并返回包含单个元素的Tensor变量,否则必须在 :math:`[-x.ndim, x.ndim]` 范围内。如果 :math:`axis[i] <0` ,则维度将变为 :math:`x.ndim+axis[i]` ,默认值为None。
- **keepdim** (bool)- 是否在输出Tensor中保留减小的维度。如果 ``keepdim`` 为False,输出Tensor的维度将比输入Tensor小,默认值为False。
- **name** (str, 可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回: 在指定dim上进行求最大值运算的Tensor,数据类型和输入数据类型一致。
返回
:::::::::
Tensor, 在指定axis上进行求最大值运算的Tensor,数据类型和输入数据类型一致。
返回类型: 变量(Variable)
**代码示例**
代码示例
::::::::::
.. code-block:: python
import numpy as np
import paddle
import paddle.fluid as fluid
# x是一个Tensor,元素如下:
# [[0.2, 0.3, 0.5, 0.9]
# [0.1, 0.2, 0.6, 0.7]]
# 接下来的示例中,我们在每处函数调用后面都标注出了它的结果张量。
x = fluid.data(name='x', shape=[2, 4], dtype='float32')
# paddle.max 等价于 paddle.tensor.max
paddle.max(x) # [0.9]
paddle.max(x, dim=0) # [0.2, 0.3, 0.6, 0.9]
paddle.max(x, dim=-1) # [0.9, 0.7]
paddle.max(x, dim=1, keep_dim=True) # [[0.9], [0.7]]
# y是一个shape为[2, 2, 2]的Tensor,元素如下:
# [[[1.0, 2.0], [3.0, 4.0]],
# [[5.0, 6.0], [7.0, 8.0]]]
# 接下来的示例中,我们在每处函数调用后面都标注出了它的结果张量。
y = fluid.data(name='y', shape=[2, 2, 2], dtype='float32')
paddle.max(y, dim=[1, 2]) # [4.0, 8.0]
paddle.max(y, dim=[0, 1]) # [7.0, 8.0]
paddle.disable_static()
# data_x is a variable with shape [2, 4]
# the axis is a int element
data_x = np.array([[0.2, 0.3, 0.5, 0.9],
[0.1, 0.2, 0.6, 0.7]])
x = paddle.to_variable(data_x)
result1 = paddle.max(x)
print(result1.numpy())
#[0.9]
result2 = paddle.max(x, axis=0)
print(result2.numpy())
#[0.2 0.3 0.6 0.9]
result3 = paddle.max(x, axis=-1)
print(result3.numpy())
#[0.9 0.7]
result4 = paddle.max(x, axis=1, keepdim=True)
print(result4.numpy())
#[[0.9]
# [0.7]]
# data_y is a variable with shape [2, 2, 2]
# the axis is list
data_y = np.array([[[1.0, 2.0], [3.0, 4.0]],
[[5.0, 6.0], [7.0, 8.0]]])
y = paddle.to_variable(data_y)
result5 = paddle.max(y, axis=[1, 2])
print(result5.numpy())
#[4. 8.]
result6 = paddle.max(y, axis=[0, 1])
print(result6.numpy())
#[7. 8.]
.. _cn_api_paddle_tensor_maximum:
maximum
-------------------------------
.. py:function:: paddle.tensor.maximum(x, y, axis=-1, name=None)
:alias_main: paddle.maximum
:alias: paddle.maximum,paddle.tensor.maximum,paddle.tensor.math.maximum
该OP逐元素对比输入的两个多维Tensor,并且把各个位置更大的元素保存到返回结果中。
等式是:
.. math::
Out = max(X, Y)
- :math:`X` :多维Tensor。
- :math:`Y` :多维Tensor。
此运算算子有两种情况:
1. :math:`Y` 的 ``shape`` 与 :math:`X` 相同。
2. :math:`Y` 的 ``shape`` 是 :math:`X` 的连续子序列。
对于情况2:
1. 用 :math:`Y` 的 ``shape`` 匹配 :math:`X` 的 ``shape``,其中 ``axis`` 是 :math:`Y` 在 :math:`X` 上的起始维度的位置。
2. 如果 ``axis`` < 0(默认值为-1),则 :math:`axis = abs(X.ndim - Y.ndim) - axis - 1` 。
3. 考虑到子序列, :math:`Y` 的大小为1的尾部维度将被忽略,例如shape(Y)=(2,1)=>(2)。
例如:
.. code-block:: text
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
具体的飞桨的广播(broadcasting)机制可以参考 `<<PaddlePaddle广播机制文档>> <https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/fluid/beginners_guide/basic_concept/broadcasting.rst>`_ 。
参数
:::::::::
- **x** (Tensor)- 多维Tensor。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64`` 。
- **y** (Tensor)- 多维Tensor。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64`` 。
- **axis** (int32, 可选)- Y的维度对应到X维度上时的索引。默认值为 -1。
- **name** (string, 可选)- 输出的名字。默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回
:::::::::
Tensor,维度和数据类型与 ``x`` 相同的多维Tensor。
代码示例
::::::::::
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x_data = np.array([[1, 2], [3, 4]], dtype=np.float32)
y_data = np.array([[5, 6], [7, 8]], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.maximum(x, y)
print(res.numpy())
#[[5. 6.]
# [7. 8.]]
x_data = np.array([[[1, 2, 3], [1, 2, 3]]], dtype=np.float32)
y_data = np.array([1, 2], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.maximum(x, y, axis=1)
print(res.numpy())
#[[[1. 2. 3.]
# [2. 2. 3.]]]
x_data = np.array([2, 3, 5], dtype=np.float32)
y_data = np.array([1, 4, np.nan], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.maximum(x, y)
print(res.numpy())
#[ 2. 4. nan]
x_data = np.array([5, 3, np.inf], dtype=np.float32)
y_data = np.array([1, 4, 5], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.maximum(x, y)
print(res.numpy())
#[ 5. 4. inf]
......@@ -2,6 +2,52 @@
mean
-------------------------------
:doc_source: paddle.fluid.layers.mean
.. py:function:: paddle.mean(x, axis=None, keepdim=False, name=None)
该OP沿 ``axis`` 计算 ``x`` 的平均值。
参数
::::::::::
- x (Tensor) - 输入的Tensor,数据类型为:float32、float64、int32、int64。
- axis (int|list|tuple, 可选) - 指定对 ``x`` 进行计算的轴。``axis`` 可以是int、list(int)、tuple(int)。如果 ``axis`` 包含多个维度,则沿着 ``axis`` 中的所有轴进行计算。``axis`` 或者其中的元素值应该在范围[-D, D)内,D是 ``x`` 的维度。如果 ``axis`` 或者其中的元素值小于0,则等价于 :math:`axis + D` 。如果 ``axis`` 是None,则对 ``x`` 的全部元素计算平均值。默认值为None。
- keepdim (bool, 可选) - 是否在输出Tensor中保留减小的维度。如果 ``keepdim`` 为True,则输出Tensor和 ``x`` 具有相同的维度(减少的维度除外,减少的维度的大小为1)。否则,输出Tensor的形状会在 ``axis`` 上进行squeeze操作。默认值为False。
- name (str, 可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
返回
::::::::::
``Tensor`` ,沿着 ``axis`` 进行平均值计算的结果,数据类型和 ``x`` 相同。
代码示例
::::::::::
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x = np.array([[[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]],
[[13, 14, 15, 16],
[17, 18, 19, 20],
[21, 22, 23, 24]]], 'float32')
x = paddle.to_variable(x)
out1 = paddle.mean(x)
# [12.5]
out2 = paddle.mean(x, axis=-1)
# [[ 2.5 6.5 10.5]
# [14.5 18.5 22.5]]
out3 = paddle.mean(x, axis=-1, keepdim=True)
# [[[ 2.5]
# [ 6.5]
# [10.5]]
# [[14.5]
# [18.5]
# [22.5]]]
out4 = paddle.mean(x, axis=[0, 2])
# [ 8.5 12.5 16.5]
......@@ -4,58 +4,40 @@
meshgrid
-------------------------------
.. py:function:: paddle.tensor.meshgrid(input, name=None)
.. py:function:: paddle.tensor.meshgrid(*args, **kargs)
:alias_main: paddle.meshgrid
:alias: paddle.meshgrid,paddle.tensor.meshgrid,paddle.tensor.creation.meshgrid
:alias: paddle.meshgrid, paddle.tensor.meshgrid, paddle.tensor.creation.meshgrid
该OP的输入是tensor list, 包含 k 个一维Tensor,对每个Tensor做扩充操作,输出 k 个 k 维tensor
该OP的输入为 k 个一维张量(或包含这 k 个张量的列表),对每个张量做扩充操作,输出 k 个 k 维张量。
参数:
- **input** (Variable)- 输入变量为 k 个一维Tensor,形状分别为(N1,), (N2,), ..., (Nk, )。支持数据类型为float32,float64,int32,int64。
- **name** (str, 可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
- \* **args** (Variable|Variable数组)- 输入变量为 k 个一维张量,形状分别为(N1,), (N2,), ..., (Nk, )。支持数据类型为float32,float64,int32,int64。
- \*\* **kargs** (可选)- 目前只接受name参数(str),具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:
k 个 k 维Tensor,每个Tensor的形状均为(N1, N2, ..., Nk)。
k 个 k 维张量,每个张量的形状均为(N1, N2, ..., Nk)。
返回类型: 变量(Variable)
**代码示例**
.. code-block:: python
#静态图示例
import paddle
import paddle.fluid as fluid
import numpy as np
x = fluid.data(name='x', shape=[100], dtype='int32')
y = fluid.data(name='y', shape=[200], dtype='int32')
input_1 = np.random.randint(0, 100, [100, ]).astype('int32')
input_2 = np.random.randint(0, 100, [200, ]).astype('int32')
exe = fluid.Executor(place=fluid.CPUPlace())
grid_x, grid_y = paddle.tensor.meshgrid([x, y])
res_1, res_2 = exe.run(fluid.default_main_program(),
feed={'x': input_1,
'y': input_2},
fetch_list=[grid_x, grid_y])
#the shape of res_1 is (100, 200)
#the shape of res_2 is (100, 200)
.. code-block:: python
#动态图示例
import paddle
import paddle.fluid as fluid
import numpy as np
paddle.enable_imperative()
input_3 = np.random.randint(0, 100, [100, ]).astype('int32')
input_4 = np.random.randint(0, 100, [200, ]).astype('int32')
with fluid.dygraph.guard():
tensor_3 = fluid.dygraph.to_variable(input_3)
tensor_4 = fluid.dygraph.to_variable(input_4)
grid_x, grid_y = paddle.tensor.meshgrid([tensor_3, tensor_4])
tensor_3 = paddle.imperative.to_variable(input_3)
tensor_4 = paddle.imperative.to_variable(input_4)
grid_x, grid_y = paddle.tensor.meshgrid(tensor_3, tensor_4)
#the shape of grid_x is (100, 200)
#the shape of grid_y is (100, 200)
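为了更直观地说明扩充的方式,下面用两个很小的一维张量演示输出的具体取值(假设在命令式模式下执行,输出格式仅为示意):

.. code-block:: python

    import numpy as np
    import paddle

    paddle.enable_imperative()
    x = paddle.imperative.to_variable(np.array([1, 2, 3]).astype('int32'))
    y = paddle.imperative.to_variable(np.array([4, 5]).astype('int32'))
    grid_x, grid_y = paddle.tensor.meshgrid(x, y)
    # grid_x[i][j] = x[i],grid_y[i][j] = y[j]
    print(grid_x.numpy())
    # [[1 1]
    #  [2 2]
    #  [3 3]]
    print(grid_y.numpy())
    # [[4 5]
    #  [4 5]
    #  [4 5]]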
......@@ -3,57 +3,61 @@
min
-------------------------------
.. py:function:: paddle.tensor.min(input, dim=None, keep_dim=False, out=None, name=None)
.. py:function:: paddle.tensor.min(x, axis=None, keepdim=False, name=None)
:alias_main: paddle.min
:alias: paddle.min,paddle.tensor.min,paddle.tensor.math.min
:update_api: paddle.fluid.layers.reduce_min
该OP是对指定维度上的Tensor元素求最小值运算,并输出相应的计算结果。
该OP是对指定维度上的Tensor元素求最小值运算,并输出相应的计算结果。等价于 :ref:`cn_api_fluid_layers_reduce_min`
参数:
- **input** (Variable)- 输入变量为多维Tensor或LoDTensor,支持数据类型为float32,float64,int32,int64。
- **dim** (list | int ,可选)- 求最小值运算的维度。如果为None,则计算所有元素的最小值并返回包含单个元素的Tensor变量,否则必须在 :math:`[−rank(input),rank(input)]` 范围内。如果 :math:`dim [i] <0` ,则维度将变为 :math:`rank+dim[i]` ,默认值为None。
- **keep_dim** (bool)- 是否在输出Tensor中保留减小的维度。如 keep_dim 为true,否则结果张量的维度将比输入张量小,默认值为False。
- **out** (Variable, 可选) - 指定算子输出结果的LoDTensor/Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
参数
:::::::::
- **x** (Tensor)- Tensor,支持数据类型为float32,float64,int32,int64。
- **axis** (list | int ,可选)- 求最小值运算的维度。如果为None,则计算所有元素的最小值并返回包含单个元素的Tensor变量,否则必须在 :math:`[−x.ndim, x.ndim]` 范围内。如果 :math:`axis[i] < 0` ,则维度将变为 :math:`x.ndim+axis[i]` ,默认值为None。
- **keepdim** (bool)- 是否在输出Tensor中保留减小的维度。如果 ``keepdim`` 为False,输出Tensor的维度将比输入Tensor小,默认值为False。
- **name** (str, 可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回: 在指定dim上进行求最小值运算的Tensor,数据类型和输入数据类型一致。
返回类型: 变量(Variable)
返回
:::::::::
Tensor,在指定axis上进行求最小值运算的Tensor,数据类型和输入数据类型一致。
**代码示例**
代码示例
::::::::::
.. code-block:: python
import numpy as np
import paddle
import paddle.fluid as fluid
# x是一个Tensor,元素如下:
# [[0.2, 0.3, 0.5, 0.9]
# [0.1, 0.2, 0.6, 0.7]]
# 接下来的示例中,我们在每处函数调用后面都标注出了它的结果张量。
x = fluid.data(name='x', shape=[2, 4], dtype='float32')
# paddle.min 等价于 paddle.tensor.min
paddle.min(x) # [0.1]
paddle.min(x, dim=0) # [0.1, 0.2, 0.5, 0.7]
paddle.min(x, dim=-1) # [0.2, 0.1]
paddle.min(x, dim=1, keep_dim=True) # [[0.2], [0.1]]
# y是一个shape为[2, 2, 2]的Tensor,元素如下:
# [[[1.0, 2.0], [3.0, 4.0]],
# [[5.0, 6.0], [7.0, 8.0]]]
# 接下来的示例中,我们在每处函数调用后面都标注出了它的结果张量。
y = fluid.data(name='y', shape=[2, 2, 2], dtype='float32')
paddle.min(y, dim=[1, 2]) # [1.0, 5.0]
paddle.min(y, dim=[0, 1]) # [1.0, 2.0]
paddle.disable_static()
# data_x is a variable with shape [2, 4]
# the axis is a int element
data_x = np.array([[0.2, 0.3, 0.5, 0.9],
[0.1, 0.2, 0.6, 0.7]])
x = paddle.to_variable(data_x)
result1 = paddle.min(x)
print(result1.numpy())
#[0.1]
result2 = paddle.min(x, axis=0)
print(result2.numpy())
#[0.1 0.2 0.5 0.7]
result3 = paddle.min(x, axis=-1)
print(result3.numpy())
#[0.2 0.1]
result4 = paddle.min(x, axis=1, keepdim=True)
print(result4.numpy())
#[[0.2]
# [0.1]]
# data_y is a variable with shape [2, 2, 2]
# the axis is list
data_y = np.array([[[1.0, 2.0], [3.0, 4.0]],
[[5.0, 6.0], [7.0, 8.0]]])
y = paddle.to_variable(data_y)
result5 = paddle.min(y, axis=[1, 2])
print(result5.numpy())
#[1. 5.]
result6 = paddle.min(y, axis=[0, 1])
print(result6.numpy())
#[1. 2.]
.. _cn_api_paddle_tensor_minimum:
minimum
-------------------------------
.. py:function:: paddle.tensor.minimum(x, y, axis=-1, name=None)
:alias_main: paddle.minimum
:alias: paddle.minimum,paddle.tensor.minimum,paddle.tensor.math.minimum
该OP逐元素对比输入的两个多维Tensor,并且把各个位置更小的元素保存到返回结果中。
等式是:
.. math::
Out = min(X, Y)
- :math:`X` :多维Tensor。
- :math:`Y` :多维Tensor。
此运算算子有两种情况:
1. :math:`Y` 的 ``shape`` 与 :math:`X` 相同。
2. :math:`Y` 的 ``shape`` 是 :math:`X` 的连续子序列。
对于情况2:
1. 用 :math:`Y` 的 ``shape`` 匹配 :math:`X` 的 ``shape``,其中 ``axis`` 是 :math:`Y` 在 :math:`X` 上的起始维度的位置。
2. 如果 ``axis`` < 0(默认值为-1),则 :math:`axis = abs(X.ndim - Y.ndim) - axis - 1` 。
3. 考虑到子序列, :math:`Y` 的大小为1的尾部维度将被忽略,例如shape(Y)=(2,1)=>(2)。
例如:
.. code-block:: text
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
具体的飞桨的广播(broadcasting)机制可以参考 `<<PaddlePaddle广播机制文档>> <https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/fluid/beginners_guide/basic_concept/broadcasting.rst>`_ 。
参数
:::::::::
- **x** (Tensor)- 多维Tensor。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64`` 。
- **y** (Tensor)- 多维Tensor。数据类型为 ``float32`` 、 ``float64`` 、 ``int32`` 或 ``int64`` 。
- **axis** (int32, 可选)- Y的维度对应到X维度上时的索引。默认值为 -1。
- **name** (string, 可选)- 输出的名字。默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回
:::::::::
Tensor,维度和数据类型与 ``x`` 相同的多维Tensor。
代码示例
::::::::::
.. code-block:: python
import paddle
import numpy as np
paddle.disable_static()
x_data = np.array([[1, 2], [3, 4]], dtype=np.float32)
y_data = np.array([[5, 6], [7, 8]], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.minimum(x, y)
print(res.numpy())
#[[1. 2.]
# [3. 4.]]
x_data = np.array([[[1, 2, 3], [1, 2, 3]]], dtype=np.float32)
y_data = np.array([1, 2], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.minimum(x, y, axis=1)
print(res.numpy())
#[[[1. 1. 1.]
# [2. 2. 2.]]]
x_data = np.array([2, 3, 5], dtype=np.float32)
y_data = np.array([1, 4, np.nan], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.minimum(x, y)
print(res.numpy())
#[ 1. 3. nan]
x_data = np.array([5, 3, np.inf], dtype=np.float32)
y_data = np.array([1, 4, 5], dtype=np.float32)
x = paddle.to_variable(x_data)
y = paddle.to_variable(y_data)
res = paddle.minimum(x, y)
print(res.numpy())
#[1. 3. 5.]
......@@ -7,7 +7,6 @@ norm
:alias_main: paddle.norm
:alias: paddle.norm,paddle.tensor.norm,paddle.tensor.linalg.norm
:update_api: paddle.fluid.layers.l2_normalize
......
.. _cn_api_tensor_cn_not_equal:
.. _cn_api_tensor_not_equal:
not_equal
-------------------------------
:doc_source: paddle.fluid.layers.not_equal
.. py:function:: paddle.not_equal(x, y, name=None)
:alias_main: paddle.not_equal
:alias: paddle.not_equal,paddle.tensor.not_equal,paddle.tensor.logic.not_equal
该OP逐元素地返回 :math:`x != y` 的逻辑值,相同位置的元素不相同则返回True,否则返回False。使用重载算子 `!=` 可以有相同的计算函数效果。
**注:该OP输出的结果不返回梯度。**
参数:
- **x** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64,int32, int64。
- **y** (Tensor) - 输入Tensor,支持的数据类型包括 float32, float64, int32, int64。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的Tensor,输出Tensor的shape和输入一致,Tensor数据类型为bool。
返回类型:变量(Tensor)
**代码示例**:
.. code-block:: python
import numpy as np
import paddle
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([1, 2, 3]))
y = imperative.to_variable(np.array([1, 3, 2]))
result1 = paddle.not_equal(x, y)
print(result1.numpy()) # result1 = [False True True]
......@@ -3,32 +3,44 @@
ones
-------------------------------
.. py:function:: paddle.ones(shape, dtype, out=None, device=None)
:alias_main: paddle.ones
:alias: paddle.ones,paddle.tensor.ones,paddle.tensor.creation.ones
:update_api: paddle.fluid.layers.ones
.. py:function:: paddle.ones(shape, dtype=None)
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为1的Tensor。
参数:
- **shape** (tuple|list) - 输出Tensor的形状。
- **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为float16、float32、float64、int32或int64。
- **out** (Variable, 可选) – 指定存储运算结果的Tensor。如果设置为None或者不设置,将创建新的Tensor存储运算结果,默认值为None。
- **device** (str,可选) – 选择在哪个设备运行该操作,可选值包括None,'cpu'和'gpu'。如果 ``device`` 为None,则将选择运行Paddle程序的设备,默认为None。
- **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选) - 输出Tensor的数据类型,数据类型必须为bool、 float16、float32、float64、int32或int64。如果 ``dtype`` 为None,默认数据类型为float32。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:值全为1的Tensor,数据类型和 ``dtype`` 定义的类型一致。
返回类型:Variable
抛出异常:
- ``TypeError`` - 当 ``dtype`` 不是bool、 float16、float32、float64、int32、int64和None时。
- ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor的时, 当 ``shape`` 为Tensor时,其数据类型不是int32或者int64。
**代码示例**:
.. code-block:: python
import paddle
data = paddle.ones(shape=[3, 2], dtype='float32') # [[1., 1.], [1., 1.], [1., 1.]]
data = paddle.ones(shape=[2, 2], dtype='float32', device='cpu') # [[1., 1.], [1., 0.]]
paddle.enable_imperative()
#default dtype for ones OP
data1 = paddle.ones(shape=[3, 2])
# [[1. 1.]
# [1. 1.]
# [1. 1.]]
data2 = paddle.ones(shape=[2, 2], dtype='int32')
# [[1 1]
# [1 1]]
#attr shape is a Variable Tensor
shape = paddle.fill_constant(shape=[2], dtype='int32', value=2)
data3 = paddle.ones(shape=shape, dtype='int32')
# [[1 1]
# [1 1]]
......@@ -3,33 +3,37 @@
ones_like
-------------------------------
.. py:function:: paddle.ones_like(input, dtype=None, device=None, name=None)
.. py:function:: paddle.ones_like(x, dtype=None, name=None)
:alias_main: paddle.ones_like
:alias: paddle.ones_like,paddle.tensor.ones_like,paddle.tensor.creation.ones_like
:update_api: paddle.fluid.layers.ones_like
:alias: paddle.tensor.ones_like, paddle.tensor.creation.ones_like
该OP返回一个和 ``x`` 具有相同形状的数值都为1的Tensor,数据类型为 ``dtype`` 或者和 ``x`` 相同。
参数
::::::::::
- **x** (Tensor) – 输入的Tensor,数据类型可以是bool,float16, float32,float64,int32,int64。输出Tensor的形状和 ``x`` 相同。如果 ``dtype`` 为None,则输出Tensor的数据类型与 ``x`` 相同。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持bool,float16, float32,float64,int32,int64。当该参数值为None时, 输出Tensor的数据类型与 ``x`` 相同。默认值为None.
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回
::::::::::
Tensor:和 ``x`` 具有相同形状的数值都为1的Tensor,数据类型为 ``dtype`` 或者和 ``x`` 相同。
该OP创建一个和input具有相同的形状和数据类型的全1Tensor。
抛出异常
::::::::::
- ``TypeError`` - 如果 ``dtype`` 不是bool、float16、float32、float64、int32、int64。
参数:
- **input** (Variable) – 指定输入为一个多维的Tensor,数据类型可以是bool,float32,float64,int32,int64。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若参数为空,则输出变量的数据类型和输入变量相同,默认值为None。
- **device** (str,可选) – 选择在哪个设备运行该操作,可选值包括None,'cpu'和'gpu'。如果 ``device`` 为None,则将选择运行Paddle程序的设备,默认为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:返回一个存储结果的Tensor。
返回类型:Variable
**代码示例**:
代码示例
::::::::::
.. code-block:: python
import paddle
import paddle.fluid as fluid
x = fluid.data(name='x', dtype='float32', shape=[3])
data = paddle.ones_like(x) # data=[1.0, 1.0, 1.0]
data1 = paddle.ones_like(input=x, device="gpu") # data1=[1.0, 1.0. 1.0]
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.array([1,2,3], dtype='float32'))
out1 = paddle.ones_like(x) # [1., 1., 1.]
out2 = paddle.ones_like(x, dtype='int32') # [1, 1, 1]
.. _cn_api_tensor_random_rand:
rand
-------------------------------
**版本升级,文档正在开发中**
----------------------
.. py:function:: paddle.rand(shape, dtype=None, name=None)
:alias_main: paddle.rand
:alias: paddle.tensor.rand, paddle.tensor.random.rand
该OP返回符合均匀分布的,范围在[0, 1)的Tensor,形状为 ``shape``,数据类型为 ``dtype``。
参数
::::::::::
- **shape** (list|tuple|Tensor) - 生成的随机Tensor的形状。如果 ``shape`` 是list、tuple,则其中的元素可以是int,或者是形状为[1]且数据类型为int32、int64的Tensor。如果 ``shape`` 是Tensor,则是数据类型为int32、int64的1-D Tensor。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持float32、float64。当该参数值为None时, 输出Tensor的数据类型为float32。默认值为None.
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回
::::::::::
Tensor: 符合均匀分布的范围为[0, 1)的随机Tensor,形状为 ``shape``,数据类型为 ``dtype``。
抛出异常
::::::::::
- ``TypeError`` - 如果 ``shape`` 的类型不是list、tuple、Tensor。
- ``TypeError`` - 如果 ``dtype`` 不是float32、float64。
示例代码
::::::::::
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
# example 1: attr shape is a list which doesn't contain Tensor.
result_1 = paddle.rand(shape=[2, 3])
# [[0.451152 , 0.55825245, 0.403311 ],
# [0.22550228, 0.22106001, 0.7877319 ]]
# example 2: attr shape is a list which contains Tensor.
dim_1 = paddle.fill_constant([1], "int64", 2)
dim_2 = paddle.fill_constant([1], "int32", 3)
result_2 = paddle.rand(shape=[dim_1, dim_2, 2])
# [[[0.8879919 0.25788337]
# [0.28826773 0.9712097 ]
# [0.26438272 0.01796806]]
# [[0.33633623 0.28654453]
# [0.79109055 0.7305809 ]
# [0.870881 0.2984597 ]]]
# example 3: attr shape is a Tensor, the data type must be int64 or int32.
var_shape = paddle.imperative.to_variable(np.array([2, 3]))
result_3 = paddle.rand(var_shape)
# [[0.22920267 0.841956 0.05981819]
# [0.4836288 0.24573246 0.7516129 ]]
......@@ -3,60 +3,70 @@
randint
-------------------------------
.. py:function:: paddle.randint(low, high=None, shape=None, out=None, dtype=None, device=None, stop_gradient=False, seed=0, name=None)
.. py:function:: paddle.randint(low=0, high=None, shape=[1], dtype=None, name=None)
:alias_main: paddle.randint
:alias: paddle.randint,paddle.tensor.randint,paddle.tensor.random.randint
:alias: paddle.tensor.randint, paddle.tensor.random.randint
该OP使用从区间[low,high)内均匀分布采样的随机整数初始化一个Tensor。当high为None时(默认),均匀采样的区间为[0,low)。
该OP返回服从均匀分布的、范围在[``low``, ``high``)的随机Tensor,形状为 ``shape``,数据类型为 ``dtype``。当 ``high`` 为None时(默认),均匀采样的区间为[0, ``low``)。
参数:
- **low** (int)-要生成的随机值范围的下限,low包含在范围中。当high为None时,均匀采样的区间为[0,low)。
- **high** (int,可选)-要生成的随机值范围的上限,high不包含在范围中。默认值为None。
- **shape** (list|tuple|Variable,可选)-输出Tensor的维度,shape类型支持list,tuple,Variable。如果shape类型是list或者tuple,它的元素可以是整数或者形状为[1]的Tensor,其中整数的数据类型为int,Tensor的数据类型为int32或int64。如果shape的类型是Variable,则是1D的Tensor,Tensor的数据类型为int32或int64。如果shape为None,则会将shape设置为[1]。默认值为None。
- **out** (Variable,可选)-用于存储创建的Tensor,可以是程序中已经创建的任何Variable。默认值为None,此时将创建新的Variable来保存输出结果。
- **dtype** (np.dtype|core.VarDesc.VarType|str,可选)- 输出Tensor的数据类型,支持数据类型为int32,int64。如果dtype为None,则会将dtype设置为int64。默认值为None。
- **device** (str, 可选)-指定在GPU或CPU上创建Tensor。如果device为None,则将选择运行Paddle程序的设备,默认为None。
- **stop_gradient** (bool,可选)-指定是否停止梯度计算,默认值为False。
- **seed** (int,可选)-随机种子,用于生成样本。0表示使用系统生成的种子。注意如果种子不为0,该操作符每次都生成同样的随机数。默认为 0。
- **name** (str,可选)-具体用法请参见:ref:`api_guide_Name` ,一般无需设置,默认值为None。
参数
::::::::::
- **low** (int) - 要生成的随机值范围的下限,``low`` 包含在范围中。当 ``high`` 为None时,均匀采样的区间为[0, ``low``)。默认值为0。
- **high** (int, 可选) - 要生成的随机值范围的上限,``high`` 不包含在范围中。默认值为None,此时范围是[0, ``low``)。
- **shape** (list|tuple|Tensor) - 生成的随机Tensor的形状。如果 ``shape`` 是list、tuple,则其中的元素可以是int,或者是形状为[1]且数据类型为int32、int64的Tensor。如果 ``shape`` 是Tensor,则是数据类型为int32、int64的1-D Tensor。默认值为[1]。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持int32、int64。当该参数值为None时, 输出Tensor的数据类型为int64。默认值为None.
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回:表示一个随机初始化结果的Tensor,该Tensor的数据类型由dtype参数决定,该Tensor的维度由shape参数决定。
返回
::::::::::
Tensor:从区间[``low``,``high``)内均匀分布采样的随机Tensor,形状为 ``shape``,数据类型为 ``dtype``。
返回类型:Variable
抛出异常
::::::::::
- ``TypeError`` - 如果 ``shape`` 的类型不是list、tuple、Tensor。
- ``TypeError`` - 如果 ``dtype`` 不是int32、int64。
- ``ValueError`` - 如果 ``high`` 不大于 ``low``;或者 ``high`` 为None,且 ``low`` 不大于0。
抛出异常:
- :code:`TypeError`: shape的类型应该是list、tuple 或 Variable。
- :code:`TypeError`: dtype的类型应该是int32或int64。
- :code:`ValueError`: 该OP的high必须大于low(high为None时,则会先将high设置为low,将low设置为0,再判断low和high的大小关系)。
**代码示例**:
代码示例
:::::::::::
.. code-block:: python
import paddle.fluid as fluid
import paddle
import numpy as np
paddle.enable_imperative()
# example 1:
# attr shape is a list which doesn't contain tensor Variable.
result_1 = paddle.randint(low=-5, high=5, shape=[3, 4], dtype="int64")
# attr shape is a list which doesn't contain Tensor.
result_1 = paddle.randint(low=-5, high=5, shape=[3])
# [0, -3, 2]
# example 2:
# attr shape is a list which contains tensor Variable.
dim_1 = fluid.layers.fill_constant([1],"int64",3)
dim_2 = fluid.layers.fill_constant([1],"int32",5)
# attr shape is a list which contains Tensor.
dim_1 = paddle.fill_constant([1], "int64", 2)
dim_2 = paddle.fill_constant([1], "int32", 3)
result_2 = paddle.randint(low=-5, high=5, shape=[dim_1, dim_2], dtype="int32")
print(result_2.numpy())
# [[ 0, -1, -3],
# [ 4, -2, 0]]
# example 3:
# attr shape is a Variable, the data type must be int64 or int32.
var_shape = fluid.data(name='var_shape', shape=[2], dtype="int64")
result_3 = paddle.randint(low=-5, high=5, shape=var_shape, dtype="int32")
var_shape_int32 = fluid.data(name='var_shape_int32', shape=[2], dtype="int32")
result_4 = paddle.randint(low=-5, high=5, shape=var_shape_int32, dtype="int64")
# attr shape is a Tensor
var_shape = paddle.imperative.to_variable(np.array([3]))
result_3 = paddle.randint(low=-5, high=5, shape=var_shape)
# [-2, 2, 3]
# example 4:
# date type is int32
result_4 = paddle.randint(low=-5, high=5, shape=[3], dtype='int32')
# [-5, 4, -4]
# example 5:
# Input only one parameter
# low=0, high=10, shape=[1], dtype='int64'
result_4 = paddle.randint(10)
result_5 = paddle.randint(10)
# [7]
......@@ -3,52 +3,58 @@
randn
-------------------------------
.. py:function:: paddle.tensor.random.randn(shape, out=None, dtype=None, device=None, stop_gradient=True, name=None)
.. py:function:: paddle.randn(shape, dtype=None, name=None)
:alias_main: paddle.randn
:alias: paddle.randn,paddle.tensor.randn,paddle.tensor.random.randn
:alias: paddle.tensor.randn, paddle.tensor.random.randn
API 用于生成数据符合标准正态随机分布(均值为 0,方差为 1 的正态随机分布)的 Tensor
该OP返回符合标准正态分布(均值为0,标准差为1的正态随机分布)的随机Tensor,形状为 ``shape``,数据类型为 ``dtype``。
参数:
- **shape** (list|tuple): 生成的随机 Tensor 的形状。
- **out** (Variable, optional): 用于存储创建的 Tensor,可以是程序中已经创建的任何Variable。当该参数值为 `None` 时,将创建新的 Variable 来保存输出结果。默认值为 None。
- **dtype** (np.dtype|core.VarDesc.VarType|str, optional): 输出 Tensor 的数据类型,可选值为 float32,float64。当该参数值为 `None` 时, 输出当 Tensor 的数据类型为 `float32` 。默认值为 None.
- **device** (str, optional): 用于指定输出变量是保存在 CPU 还是 GPU 内存中。可选值为 None,'cpu','gpu'。当该参数为 None 时, 输出变量将会自动的分配到相对应内存中。默认值为 None。
- **stop_gradient** (bool, optional): 是否停止输出当前变量(输出变量)的梯度值。默认值为 True。
- **name** (str, optional): 该参数供开发人员打印调试信息时使用,具体用法参见 :ref:`api_guide_Name` ,默认值为None。
参数
::::::::::
- **shape** (list|tuple|Tensor) - 生成的随机Tensor的形状。如果 ``shape`` 是list、tuple,则其中的元素可以是int,或者是形状为[1]且数据类型为int32、int64的Tensor。如果 ``shape`` 是Tensor,则是数据类型为int32、int64的1-D Tensor。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持float32、float64。当该参数值为None时, 输出Tensor的数据类型为float32。默认值为None.
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回:符合标准正态分布的随机 Tensor。形状为 shape,数据类型为 dtype。
返回
::::::::::
Tensor:符合标准正态分布的随机Tensor,形状为 ``shape``,数据类型为 ``dtype``。
返回类型:Variable
抛出异常
::::::::::
- ``TypeError`` - 如果 ``shape`` 的类型不是list、tuple、Tensor。
- ``TypeError`` - 如果 ``dtype`` 不是float32、float64。
**示例代码**
示例代码
::::::::::
.. code-block:: python
# declarative mode
import paddle
import paddle.fluid as fluid
data = paddle.randn([2, 4])
place = fluid.CPUPlace()
exe = fluid.Executor(place)
res, = exe.run(fluid.default_main_program(), feed={}, fetch_list=[data])
print(res)
# [[-1.4187592 0.7368311 -0.53748125 -0.0146909 ]
# [-0.66294265 -1.3090698 0.1898754 -0.14065823]]
.. code-block:: python
# imperative mode
import paddle
import paddle.fluid as fluid
import paddle.fluid.dygraph as dg
place = fluid.CPUPlace()
with dg.guard(place) as g:
x = paddle.randn([2, 4])
x_np = x.numpy()
print(x_np)
# [[ 1.5149173 -0.26234224 -0.592486 1.4523455 ]
# [ 0.04581212 -0.85345626 1.1687907 -0.02512913]]
import numpy as np
paddle.enable_imperative()
# example 1: attr shape is a list which doesn't contain Tensor.
result_1 = paddle.randn(shape=[2, 3])
# [[-2.923464 0.11934398 -0.51249987]
# [ 0.39632758 0.08177969 0.2692008 ]]
# example 2: attr shape is a list which contains Tensor.
dim_1 = paddle.fill_constant([1], "int64", 2)
dim_2 = paddle.fill_constant([1], "int32", 3)
result_2 = paddle.randn(shape=[dim_1, dim_2, 2])
# [[[-2.8852394 -0.25898588]
# [-0.47420555 0.17683524]
# [-0.7989969 0.00754541]]
# [[ 0.85201347 0.32320443]
# [ 1.1399018 0.48336947]
# [ 0.8086993 0.6868893 ]]]
# example 3: attr shape is a Tensor, the data type must be int64 or int32.
var_shape = paddle.imperative.to_variable(np.array([2, 3]))
result_3 = paddle.randn(var_shape)
# [[-2.878077 0.17099959 0.05111201]
# [-0.3761474 -1.044801 1.1870178 ]]
......@@ -3,49 +3,39 @@
randperm
-------------------------------
.. py:function:: paddle.tensor.random.randperm(n, out=None, dtype="int64", device=None, stop_gradient=True, seed=0)
.. py:function:: paddle.randperm(n, dtype="int64", name=None)
:alias_main: paddle.randperm
:alias: paddle.randperm,paddle.tensor.randperm,paddle.tensor.random.randperm
:alias: paddle.tensor.randperm, paddle.tensor.random.randperm
该OP返回一个数值在0到n-1、顺序随机的整数排列。
该OP返回一个数值在0到n-1、随机排列的1-D Tensor,数据类型为 ``dtype``。
参数:
- **n** (int): 整数排列的上限,应该大于0。
- **out** (Variable, optional): 可选的输出变量,如果不为 `None` ,返回的整数排列保存在该变量中,默认是 `None` 。
- **dtype** (np.dtype|core.VarDesc.VarType|str, optional): 整数排列的数据类型,支持 `int64` 和 `int32` ,默认是 `int64` 。
- **device** (str, optional): 指定整数排列所在的设备内存。设置为 `cpu` 则保存在 `cpu` 内存中,设置为 `gpu` ,则保存在 `gpu` 内存中,设置为 `None` 则保存在运行的设备内存中。默认是 `None` 。
- **stop_gradient** (bool, optional): 返回的整数排列是否记录并更新梯度,默认是 `True` 。
- **seed** (int, optional): 设置随机种子。`seed` 等于0时,每次返回不同的整数排列;`seed` 不等于0时,相同的 `seed` 返回相同的整数排列。
::::::::::
- **n** (int) - 随机序列的上限(不包括在序列中),应该大于0。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持int32、int64、float32、float64。默认值为"int64".
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回: 一个数值在0到n-1、顺序随机的整数排列。
返回
::::::::::
Tensor:一个数值在0到n-1、随机排列的1-D Tensor,数据类型为 ``dtype`` 。
返回类型: Variable
抛出异常
::::::::::
- ValueError - 如果 ``n`` 不大于0.
- TypeError - 如果 ``dtype`` 不是int32、int64、float32、float64.
**代码示例**:
代码示例
::::::::::
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
# Note that, the random permutation returned by randperm depends
# the random seed in computer, so the output in the next example
# will be change.
with fluid.dygraph.guard():
out_1 = paddle.randperm(6)
print(out_1.numpy()) # Random permutation, for example [2 4 5 0 3 1]
out_2 = fluid.dygraph.to_variable(
np.array([0, 1, 2, 3])).astype(np.int64)
paddle.randperm(6, out_2)
print(out_2.numpy()) # Random permutation, for example [5 0 2 4 1 3]
out_3 = paddle.randperm(6, dtype="int32", device="cpu")
print(out_3.numpy()) # Random permutation, for example [3 1 4 2 5 0]
out_4 = paddle.randperm(6, device="cpu", stop_gradient=True)
print(out_4.numpy()) # Random permutation, for example [3 1 5 2 0 4]
paddle.enable_imperative()
result_1 = paddle.randperm(5)
# [4 1 2 3 0]
result_2 = paddle.randperm(7, 'int32')
# [1 6 2 0 4 3 5]
......@@ -3,19 +3,20 @@
roll
-------------------------------
.. py:function:: paddle.roll(input, shifts, dims=None):
.. py:function:: paddle.roll(x, shifts, axis=None, name=None):
:alias_main: paddle.roll
:alias: paddle.roll,paddle.tensor.roll,paddle.tensor.manipulation.roll
:alias: paddle.roll, paddle.tensor.roll, paddle.tensor.manipulation.roll
该OP沿着指定维度对输入 ``input`` 进行循环滚动,当元素移动到最后位置时,会从第一个位置重新插入。如果 ``dims`` 为 ``None`` ,则输入在被循环滚动之前,会先展平成 ``1-D Tensor`` ,滚动操作完成后恢复成原来的形状。
该OP沿着指定维度 ``axis`` 对输入 ``x`` 进行循环滚动,当元素移动到最后位置时,会从第一个位置重新插入。如果 ``axis`` 为 ``None`` ,则输入在被循环滚动之前,会先展平成 ``1-D Tensor`` ,滚动操作完成后恢复成原来的形状。
**参数**:
- **input** (Variable)– 输入张量。
- **shifts** (int|list|tuple) - 滚动位移。如果 ``shifts`` 是一个元组或者列表,则 ``dims`` 必须是相同大小的元组或者列表,输入张量将依次沿着每个维度滚动相应的数值。
- **dim** (int|list|tuple, optinal) – 滚动轴。
- **x** (Variable)– 输入张量。
- **shifts** (int|list|tuple) - 滚动位移。如果 ``shifts`` 是一个元组或者列表,则 ``axis`` 必须是相同大小的元组或者列表,输入张量将依次沿着每个维度滚动相应的数值。
- **axis** (int|list|tuple, 可选) – 滚动轴。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
**返回**:
- **Variable**,数据类型同输入。
......@@ -26,19 +27,18 @@ roll
import numpy as np
import paddle
import paddle.fluid as fluid
data = np.array([[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]])
with fluid.dygraph.guard():
x = fluid.dygraph.to_variable(data)
paddle.enable_imperative()
x = paddle.imperative.to_variable(data)
out_z1 = paddle.roll(x, shifts=1)
print(out_z1.numpy())
#[[9. 1. 2.]
# [3. 4. 5.]
# [6. 7. 8.]]
out_z2 = paddle.roll(x, shifts=1, dims=0)
out_z2 = paddle.roll(x, shifts=1, axis=0)
print(out_z2.numpy())
#[[7. 8. 9.]
# [1. 2. 3.]
......
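当 ``shifts`` 为列表时,``axis`` 需要是等长的列表,各个维度依次滚动对应的位移。下面是一个简单示意(沿用上例数据,假设在命令式模式下执行,输出格式仅供参考):

.. code-block:: python

    import numpy as np
    import paddle

    paddle.enable_imperative()
    data = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [7.0, 8.0, 9.0]])
    x = paddle.imperative.to_variable(data)
    # 先沿第0维滚动1位,再沿第1维滚动1位
    out = paddle.roll(x, shifts=[1, 1], axis=[0, 1])
    print(out.numpy())
    # [[9. 7. 8.]
    #  [3. 1. 2.]
    #  [6. 4. 5.]]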
......@@ -3,73 +3,61 @@
sort
-------------------------------
.. py:function:: paddle.sort(input, axis=-1, descending=False, out=None, name=None)
.. py:function:: paddle.sort(x, axis=-1, descending=False, name=None)
:alias_main: paddle.sort
:alias: paddle.sort,paddle.tensor.sort,paddle.tensor.search.sort
:update_api: paddle.fluid.layers.argsort
对输入变量沿给定轴进行排序,输出排序好的数据和相应的索引,其维度和输入相同。**默认升序排列,如果需要降序排列设置** ``descending=True`` 。
对输入变量沿给定轴进行排序,输出排序好的数据,其维度和输入相同。默认升序排列,如果需要降序排列设置 ``descending=True`` 。
参数:
- **input** (Variable) - 输入的多维 ``Tensor`` ,支持的数据类型:float32、float64、int16、int32、int64、uint8。
- **x** (Tensor) - 输入的多维 ``Tensor`` ,支持的数据类型:float32、float64、int16、int32、int64、uint8。
- **axis** (int,可选) - 指定对输入Tensor进行运算的轴, ``axis`` 的有效范围是[-R, R),R是输入 ``x`` 的Rank, ``axis`` 为负时与 ``axis`` +R 等价。默认值为-1。
- **descending** (bool,可选) - 指定算法排序的方向。如果设置为True,算法按照降序排序。如果设置为False或者不设置,按照升序排序。默认值为False。
- **out** (Variable, 可选) – 指定存储运算结果的Tensor(与 ``input`` 维度相同、数据类型相同)。如果设置为None或者不设置,将创建新的Tensor存储运算结果,默认值为None。
- **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:一组已排序的输出(与 ``input`` 维度相同、数据类型相同)和索引(数据类型为int64)。
返回:Tensor, 排序后的输出(与 ``x`` 维度相同、数据类型相同)。
返回类型:tuple[Variable]
**代码示例**:
.. code-block:: python
import paddle
import paddle.fluid as fluid
import paddle.imperative as imperative
import numpy as np
in1 = np.array([[[5,8,9,5],
paddle.enable_imperative()
input_array = np.array([[[5,8,9,5],
[0,0,1,7],
[6,9,2,4]],
[[5,2,4,2],
[4,7,7,9],
[1,7,0,6]]]).astype(np.float32)
with fluid.dygraph.guard():
x = fluid.dygraph.to_variable(in1)
out1 = paddle.sort(input=x, axis=-1) # same as axis==2
out2 = paddle.sort(input=x, axis=0)
out3 = paddle.sort(input=x, axis=1)
print(out1[0].numpy())
# [[[5. 5. 8. 9.]
x = imperative.to_variable(input_array)
out1 = paddle.sort(x=x, axis=-1)
out2 = paddle.sort(x=x, axis=0)
out3 = paddle.sort(x=x, axis=1)
print(out1.numpy())
#[[[5. 5. 8. 9.]
# [0. 0. 1. 7.]
# [2. 4. 6. 9.]]
# [[2. 2. 4. 5.]
# [4. 7. 7. 9.]
# [0. 1. 6. 7.]]]
print(out1[1].numpy())
# [[[0 3 1 2]
# [0 1 2 3]
# [2 3 0 1]]
# [[1 3 2 0]
# [0 1 2 3]
# [2 0 3 1]]]
print(out2[0].numpy())
# [[[5. 2. 4. 2.]
print(out2.numpy())
#[[[5. 2. 4. 2.]
# [0. 0. 1. 7.]
# [1. 7. 0. 4.]]
# [[5. 8. 9. 5.]
# [4. 7. 7. 9.]
# [6. 9. 2. 6.]]]
print(out3[0].numpy())
# [[[0. 0. 1. 4.]
print(out3.numpy())
#[[[0. 0. 1. 4.]
# [5. 8. 2. 5.]
# [6. 9. 9. 7.]]
# [[1. 2. 0. 2.]
# [4. 7. 4. 6.]
# [5. 7. 7. 9.]]]
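上述示例均为默认的升序排列。作为补充,下面给出 ``descending=True`` 的一个简要示意(仅作说明,假设运行环境与上例一致):
.. code-block:: python
import numpy as np
import paddle
import paddle.imperative as imperative
paddle.enable_imperative()
x = imperative.to_variable(np.array([[5., 8., 9., 5.], [0., 0., 1., 7.]]).astype('float32'))
# descending=True 表示沿最后一维降序排列
out = paddle.sort(x=x, axis=-1, descending=True)
print(out.numpy())
# 预期输出:
# [[9. 8. 5. 5.]
#  [7. 1. 0. 0.]]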
......@@ -2,39 +2,55 @@
split
-------------------------------
.. py:function:: paddle.tensor.split(input, num_or_sections, dim=-1, name=None)
:alias_main: paddle.split
:alias: paddle.split,paddle.tensor.split,paddle.tensor.manipulation.split
:update_api: paddle.fluid.layers.split
.. py:function:: paddle.tensor.split(x, num_or_sections, axis=0, name=None)
该OP将输入Tensor分割成多个子Tensor。
**参数**:
- **input** (Variable) - 输入变量,数据类型为float32,float64,int32,int64的多维Tensor或者LoDTensor。
- **num_or_sections** (int|list|tuple) - 如果 num_or_sections 是一个整数,则表示Tensor平均划分为相同大小子Tensor的数量。如果 num_or_sections 是一个list或tuple,那么它的长度代表子Tensor的数量,它的元素可以是整数或者形状为[1]的Tensor,依次代表子Tensor需要分割成的维度的大小。list或tuple的长度不能超过输入Tensor待分割的维度的大小。在list或tuple中,至多有一个元素值为-1,表示该值是由input的维度和其他num_or_sections中元素推断出来的。例如对一个维度为[4,6,6]Tensor的第三维进行分割时,指定num_or_sections=[2,-1,1],输出的三个Tensor维度分别为:[4,6,2],[4,6,3],[4,6,1]。
- **dim** (int|Variable,可选) - 整数或者形状为[1]的Tensor,数据类型为int32或int64。表示需要分割的维度。如果dim < 0,则划分的维度为rank(input) + dim。默认值为-1
- **name** (str,可选) - 一般无需设置,默认值为None。
- **x** (Tensor) - 输入变量,数据类型为bool, float16, float32,float64,int32,int64的多维Tensor。
- **num_or_sections** (int|list|tuple) - 如果 ``num_or_sections`` 是一个整数,则表示Tensor平均划分为相同大小子Tensor的数量。如果 ``num_or_sections`` 是一个list或tuple,那么它的长度代表子Tensor的数量,它的元素可以是整数或者形状为[1]的Tensor,依次代表子Tensor需要分割成的维度的大小。list或tuple的长度不能超过输入Tensor待分割的维度的大小。在list或tuple中,至多有一个元素值为-1,表示该值是由 ``x`` 的维度和其他 ``num_or_sections`` 中元素推断出来的。例如对一个维度为[4,6,6]Tensor的第三维进行分割时,指定 ``num_or_sections=[2,-1,1]`` ,输出的三个Tensor维度分别为:[4,6,2],[4,6,3],[4,6,1]。
- **axis** (int|Tensor,可选) - 整数或者形状为[1]的Tensor,数据类型为int32或int64。表示需要分割的维度。如果 ``axis < 0`` ,则划分的维度为 ``rank(x) + axis`` 。默认值为0
- **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
**返回**:分割后的Tensor列表。
返回:分割后的Tensor列表。
**返回类型**:列表(Variable(Tensor|LoDTensor)),数据类型为int32,int64,float32,float64。
抛出异常:
- :code:`TypeError`:``x`` 的数据类型不是float16、float32、float64、int32或int64时。
- :code:`TypeError`:``num_or_sections`` 不是int、list 或 tuple时。
- :code:`TypeError`:``axis`` 不是 int 或 Tensor时。当 ``axis`` 为Tensor,其数据类型不是int32或int64时。
**代码示例**:
.. code-block:: python
import paddle
import paddle.fluid as fluid
import numpy as np
with fluid.dygraph.guard():
input_1 = np.random.random([4, 6, 6]).astype("int32")
# input is a variable which shape is [4, 6, 6]
input = fluid.dygraph.to_variable(input_1)
x0, x1, x2 = paddle.split(input, num_or_sections= 3, dim=1)
# x0.shape [4, 2, 6]
# x1.shape [4, 2, 6]
# x2.shape [4, 2, 6]
import paddle
paddle.enable_imperative()
# x is a Tensor which shape is [3, 9, 5]
x_np = np.random.random([3, 9, 5]).astype("int32")
x = paddle.imperative.to_variable(x_np)
out0, out1, out2 = paddle.split(x, num_or_sections=3, axis=1)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
out0, out1, out2 = paddle.split(x, num_or_sections=[2, 3, 4], axis=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]
out0, out1, out2 = paddle.split(x, num_or_sections=[2, 3, -1], axis=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]
# axis is negative, the real axis is (rank(x) + axis) which real
# value is 1.
out0, out1, out2 = paddle.split(x, num_or_sections=3, axis=-2)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
......@@ -3,10 +3,10 @@
trace
-------------------------------
.. py:function:: paddle.trace(input, offset=0, dim1=0, dim2=1)
.. py:function:: paddle.trace(x, offset=0, axis1=0, axis2=1, name=None)
:alias_main: paddle.trace
:alias: paddle.trace,paddle.tensor.trace,paddle.tensor.math.trace
:alias: paddle.trace, paddle.tensor.trace, paddle.tensor.math.trace
......@@ -14,7 +14,7 @@ trace
如果输入是 2D Tensor,则返回对角线元素之和。
如果输入的维度大于 2D,则返回一个由对角线元素之和组成的数组,其中对角线从由 dim1 和 dim2 指定的二维平面中获得。默认由输入的前两维组成获得对角线的 2D 平面。
如果输入的维度大于 2D,则返回一个由对角线元素之和组成的数组,其中对角线从由 axis1 和 axis2 指定的二维平面中获得。默认由输入的前两维组成获得对角线的 2D 平面。
参数 ``offset`` 确定从指定的二维平面中获取对角线的位置:
......@@ -23,10 +23,11 @@ trace
- 如果 offset < 0,则取主对角线左下的对角线。
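为便于理解 ``offset`` 的取法,下面用一个 3×3 矩阵给出数值示意(仅作补充说明,假设环境与下文代码示例一致):
.. code-block:: python
import numpy as np
import paddle
paddle.enable_imperative()
m = paddle.imperative.to_variable(np.arange(1, 10).reshape([3, 3]).astype('float32'))
# m = [[1. 2. 3.], [4. 5. 6.], [7. 8. 9.]]
print(paddle.trace(m).numpy())             # offset=0,主对角线:1+5+9,预期为 [15.]
print(paddle.trace(m, offset=1).numpy())   # 主对角线右上方:2+6,预期为 [8.]
print(paddle.trace(m, offset=-1).numpy())  # 主对角线左下方:4+8,预期为 [12.]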
参数:
- **input** (Variable)- 输入变量,至少为 2D 数组,支持数据类型为 float32,float64,int32,int64。
- **x** (Variable)- 输入张量,至少为 2D 数组,支持数据类型为 float32,float64,int32,int64。
- **offset** (int ,可选)- 从指定的二维平面中获取对角线的位置,默认值为 0,即主对角线。
- **dim1** (int , 可选)- 获取对角线的二维平面的第一维,默认值为 0。
- **dim2** (int , 可选)- 获取对角线的二维平面的第二维,默认值为 1。
- **axis1** (int , 可选)- 获取对角线的二维平面的第一维,默认值为 0。
- **axis2** (int , 可选)- 获取对角线的二维平面的第二维,默认值为 1。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回: 指定二维平面的对角线元素之和。数据类型和输入数据类型一致。
......@@ -36,18 +37,17 @@ trace
.. code-block:: python
import paddle.tensor as tensor
import paddle.fluid.dygraph as dg
import paddle
import numpy as np
case1 = np.random.randn(2, 3).astype('float32')
case2 = np.random.randn(3, 10, 10).astype('float32')
case3 = np.random.randn(3, 10, 5, 10).astype('float32')
with dg.guard():
case1 = dg.to_variable(case1)
case2 = dg.to_variable(case2)
case3 = dg.to_variable(case3)
data1 = tensor.trace(case1) # data1.shape = [1]
data2 = tensor.trace(case2, offset=1, dim1=1, dim2=2) # data2.shape = [3]
data3 = tensor.trace(case3, offset=-3, dim1=1, dim2=-1) # data3.shape = [3, 5]
paddle.enable_imperative()
case1 = paddle.imperative.to_variable(case1)
case2 = paddle.imperative.to_variable(case2)
case3 = paddle.imperative.to_variable(case3)
data1 = paddle.trace(case1) # data1.shape = [1]
data2 = paddle.trace(case2, offset=1, axis1=1, axis2=2) # data2.shape = [3]
data3 = paddle.trace(case3, offset=-3, axis1=1, axis2=-1) # data3.shape = [3, 5]
......@@ -3,31 +3,41 @@
zeros
-------------------------------
.. py:function:: paddle.zeros(shape, dtype, out=None, device=None)
:alias_main: paddle.zeros
:alias: paddle.zeros,paddle.tensor.zeros,paddle.tensor.creation.zeros
:update_api: paddle.fluid.layers.zeros
.. py:function:: paddle.zeros(shape, dtype=None, name=None)
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为0的Tensor。
参数:
- **shape** (tuple|list) - 输出Tensor的形状。
- **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为float16、float32、float64、int32或int64。
- **out** (Variable, 可选) – 指定存储运算结果的Tensor。如果设置为None或者不设置,将创建新的Tensor存储运算结果,默认值为None。
- **device** (str,可选) – 选择在哪个设备运行该操作,可选值包括None,'cpu'和'gpu'。如果 ``device`` 为None,则将选择运行Paddle程序的设备,默认为None。
- **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64。
- **dtype** (np.dtype|core.VarDesc.VarType|str,可选) - 输出Tensor的数据类型,数据类型必须为bool、float16、float32、float64、int32或int64。若为None,则输出Tensor的数据类型为float32。默认值为None。
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回:值全为0的Tensor,数据类型和 ``dtype`` 定义的类型一致。
返回类型:Variable
抛出异常:
- ``TypeError`` - 当 ``dtype`` 不是bool、 float16、float32、float64、int32、int64和None时。
- ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor时, 当 ``shape`` 为Tensor,其数据类型不是int32或者int64时。
**代码示例**:
.. code-block:: python
import paddle
data = paddle.zeros(shape=[3, 2], dtype='float32') # [[0., 0.], [0., 0.], [0., 0.]]
data = paddle.zeros(shape=[2, 2], dtype='float32', device='cpu') # [[0., 0.], [0., 0.]]
paddle.enable_imperative() # Now we are in imperative mode
data = paddle.zeros(shape=[3, 2], dtype='float32')
# [[0. 0.]
# [0. 0.]
# [0. 0.]]
data = paddle.zeros(shape=[2, 2])
# [[0. 0.]
# [0. 0.]]
# shape is a Tensor
shape = paddle.fill_constant(shape=[2], dtype='int32', value=2)
data3 = paddle.zeros(shape=shape, dtype='int32')
# [[0 0]
# [0 0]]
......@@ -3,33 +3,38 @@
zeros_like
-------------------------------
.. py:function:: paddle.zeros_like(input, dtype=None, device=None, name=None)
.. py:function:: paddle.zeros_like(x, dtype=None, name=None)
:alias_main: paddle.zeros_like
:alias: paddle.zeros_like,paddle.tensor.zeros_like,paddle.tensor.creation.zeros_like
:alias: paddle.tensor.zeros_like, paddle.tensor.creation.zeros_like
:update_api: paddle.fluid.layers.zeros_like
该OP返回一个和 ``x`` 具有相同的形状的全零Tensor,数据类型为 ``dtype`` 或者和 ``x`` 相同。
参数
::::::::::
- **x** (Tensor) – 输入的多维Tensor,数据类型可以是bool,float16, float32,float64,int32,int64。输出Tensor的形状和 ``x`` 相同。如果 ``dtype`` 为None,则输出Tensor的数据类型与 ``x`` 相同。
- **dtype** (str|np.dtype|core.VarDesc.VarType, 可选) - 输出Tensor的数据类型,支持bool、float16、float32、float64、int32、int64。当该参数值为None时,输出Tensor的数据类型与 ``x`` 相同。默认值为None。
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回
::::::::::
Tensor:和 ``x`` 具有相同的形状全零Tensor,数据类型为 ``dtype`` 或者和 ``x`` 相同。
该OP创建一个和input具有相同的形状和数据类型的全零Tensor。
抛出异常
::::::::::
- ``TypeError`` - 如果 ``dtype`` 不是bool、float16、float32、float64、int32、int64。
参数:
- **input** (Variable) – 指定输入为一个多维的Tensor,数据类型可以是bool,float32,float64,int32,int64。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若参数为空,则输出变量的数据类型和输入变量相同,默认值为None。
- **device** (str,可选) – 选择在哪个设备运行该操作,可选值包括None,'cpu'和'gpu'。如果 ``device`` 为None,则将选择运行Paddle程序的设备,默认为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:返回一个存储结果的Tensor。
返回类型:Variable
**代码示例**:
代码示例
::::::::::
.. code-block:: python
import paddle
import paddle.fluid as fluid
x = fluid.data(name='x', dtype='float32', shape=[3])
data1 = paddle.zeros_like(input=x, device="gpu") # data1=[0.0, 0.0, 0.0]
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.array([1,2,3], dtype='float32'))
out1 = paddle.zeros_like(x) # [0., 0., 0.]
out2 = paddle.zeros_like(x, dtype='int32') # [0, 0, 0]
......@@ -13,4 +13,3 @@ fluid.transpiler
transpiler_cn/HashName_cn.rst
transpiler_cn/memory_optimize_cn.rst
transpiler_cn/release_memory_cn.rst
transpiler_cn/RoundRobin_cn.rst
.. _cn_api_fluid_transpiler_RoundRobin:
RoundRobin
-------------------------------
.. py:class:: paddle.fluid.transpiler.RoundRobin(pserver_endpoints)
:api_attr: 声明式编程模式(静态图)
该方法使用 ``RoundRobin`` 的方式将变量散列到多个parameter server终端。
`RoundRobin <https://en.wikipedia.org/wiki/Round-robin_scheduling>`_
参数:
- **pserver_endpoints** (list) - endpoint (ip:port)的 list
返回:实例化后的RoundRobin的对象
返回类型:RoundRobin
**代码示例**
.. code-block:: python
from paddle.fluid.transpiler import RoundRobin
pserver_endpoints = ["127.0.0.1:6007", "127.0.0.1:6008"]
vars = ["var1", "var2", "var3", "var4", "var5"]
rr = RoundRobin(pserver_endpoints)
rr.dispatch(vars)
.. py:method:: dispatch(varlist)
该方法使用RoundRobin的方式将多个参数散列到多个parameter Server终端。
参数:
- **varlist** (list) - 参数 (var1, var2, var3) 的 list
返回:基于varlist中var的顺序,返回参数服务器(ip:port)的列表, 列表中的数据量和varlist的数据量一致。
返回类型:list
**代码示例**
.. code-block:: python
from paddle.fluid.transpiler import RoundRobin
pserver_endpoints = ["127.0.0.1:6007", "127.0.0.1:6008"]
vars = ["var1", "var2", "var3", "var4", "var5"]
rr = RoundRobin(pserver_endpoints)
rr.dispatch(vars)
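作为补充,下面的示意展示了如何获取并查看 ``dispatch`` 的返回值(其中的预期输出假设按变量顺序轮询分配,仅供参考):
.. code-block:: python
from paddle.fluid.transpiler import RoundRobin
pserver_endpoints = ["127.0.0.1:6007", "127.0.0.1:6008"]
vars = ["var1", "var2", "var3", "var4", "var5"]
rr = RoundRobin(pserver_endpoints)
# dispatch 返回与 varlist 等长的 endpoint 列表
eplist = rr.dispatch(vars)
print(eplist)
# 若按轮询顺序依次分配,预期类似:
# ['127.0.0.1:6007', '127.0.0.1:6008', '127.0.0.1:6007', '127.0.0.1:6008', '127.0.0.1:6007']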
.. py:method:: reset()
该方法将重置RoundRobin内置的计数, 计数将重置为0。
返回:无。
**代码示例**
.. code-block:: python
from paddle.fluid.transpiler import RoundRobin
pserver_endpoints = ["127.0.0.1:6007", "127.0.0.1:6008"]
vars = ["var1", "var2", "var3", "var4", "var5"]
rr = RoundRobin(pserver_endpoints)
rr.reset()
......@@ -4,151 +4,152 @@
TensorFlow-Fluid常用接口对应表
###############################
本文档基于TensorFlow v1.13梳理了常用API与PaddlePaddle API对应关系和差异分析。根据文档对应关系,有TensorFlow使用经验的用户,可根据对应关系,快速熟悉PaddlePaddle的接口使用。
本文档基于TensorFlow v1.15梳理了常用API与PaddlePaddle API对应关系和差异分析。根据文档对应关系,有TensorFlow使用经验的用户,可根据对应关系,快速熟悉PaddlePaddle的接口使用。
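以表中的 ``tf.concat`` 为例,其在 Fluid 中的对应写法可参考下面的简要示意(仅作说明,假设使用动态图模式):
.. code-block:: python
import numpy as np
import paddle.fluid as fluid
# 与 tf.concat([a, b], axis=0) 对应的 Fluid 写法示意
with fluid.dygraph.guard():
    a = fluid.dygraph.to_variable(np.ones([2, 3], dtype='float32'))
    b = fluid.dygraph.to_variable(np.zeros([2, 3], dtype='float32'))
    out = fluid.layers.concat([a, b], axis=0)
    print(out.numpy().shape)  # 预期为 (4, 3)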
.. csv-table::
:header: "序号", "TensorFlow接口", "Fluid接口", "备注"
:widths: 1, 8, 8, 3
"1", "`tf.abs <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/abs>`_", ":ref:`cn_api_fluid_layers_abs`", "功能一致"
"2", "`tf.add <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/add>`_", ":ref:`cn_api_fluid_layers_elementwise_add`", "功能一致"
"3", "`tf.argmax <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/argmax>`_", ":ref:`cn_api_fluid_layers_argmax`", "功能一致"
"4", "`tf.argmin <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/argmin>`_", ":ref:`cn_api_fluid_layers_argmin`", "功能一致"
"5", "`tf.assign <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/assign>`_", ":ref:`cn_api_fluid_layers_assign`", "功能一致"
"6", "`tf.assign_add <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/assign_add>`_", ":ref:`cn_api_fluid_layers_increment`", "功能一致"
"7", "`tf.case <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/case>`_", ":ref:`cn_api_fluid_layers_Switch`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.case.md>`_"
"8", "`tf.cast <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/dtypes/cast>`_", ":ref:`cn_api_fluid_layers_cast`", "功能一致"
"9", "`tf.clip_by_global_norm <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/clip_by_global_norm>`_", ":ref:`cn_api_fluid_clip_GradientClipByGlobalNorm`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.clip_by_global_norm.md>`_"
"10", "`tf.clip_by_norm <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/clip_by_norm>`_", ":ref:`cn_api_fluid_layers_clip_by_norm`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.clip_by_norm.md>`_"
"11", "`tf.clip_by_value <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/clip_by_value>`_", ":ref:`cn_api_fluid_layers_clip`", "功能一致"
"12", "`tf.concat <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/concat>`_", ":ref:`cn_api_fluid_layers_concat`", "功能一致"
"13", "`tf.cond <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/cond>`_", ":ref:`cn_api_fluid_layers_ifElse`", "功能一致"
"14", "`tf.constant <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/constant>`_", ":ref:`cn_api_fluid_layers_fill_constant`", "功能一致"
"15", "`tf.contrib.layers.batch_norm <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/contrib/layers/batch_norm>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"16", "`tf.contrib.layers.flatten <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/contrib/layers/flatten>`_", ":ref:`cn_api_fluid_layers_flatten`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.contrib.layers.flatten.md>`_"
"17", "`tf.contrib.layers.fully_connected <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/contrib/layers/fully_connected>`_", ":ref:`cn_api_fluid_layers_fc`", "功能一致"
"18", "`tf.contrib.layers.one_hot_encoding <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/contrib/layers/one_hot_encoding>`_", ":ref:`cn_api_fluid_layers_one_hot`", "功能一致"
"19", "`tf.contrib.layers.softmax <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/contrib/layers/softmax>`_", ":ref:`cn_api_fluid_layers_softmax`", "功能一致"
"20", "`tf.contrib.layers.xavier_initializer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/contrib/layers/xavier_initializer>`_", ":ref:`cn_api_fluid_initializer_Xavier`", "功能一致"
"21", "`tf.nn.rnn.GRUCell <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/rnn_cell/GRUCell>`_", ":ref:`cn_api_fluid_layers_gru_unit`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn.GRUCell.md>`_"
"22", "`tf.nn.rnn.MultiRNNCell <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/rnn_cell/MultiRNNCell>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn_cell.MultiRNNCell.md>`_"
"23", "`tf.nn.rnn.static_rnn <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/static_rnn>`_", ":ref:`cn_api_fluid_layers_DynamicRNN`", "功能一致"
"24", "`tf.convert_to_tensor <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/convert_to_tensor>`_", ":ref:`cn_api_fluid_layers_assign`", "功能一致"
"25", "`tf.cos <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/cos>`_", ":ref:`cn_api_fluid_layers_cos`", "功能一致"
"26", "`tf.div <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/div>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"27", "`tf.divide <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/divide>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"28", "`tf.dropout <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.dropout.md>`_"
"29", "`tf.equal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/equal>`_", "`运算符== <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"30", "`tf.exp <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/exp>`_", ":ref:`cn_api_fluid_layers_exp`", "功能一致"
"31", "`tf.expand_dims <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/expand_dims>`_", ":ref:`cn_api_fluid_layers_unsqueeze`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.expand_dims.md>`_"
"32", "`tf.fill <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/fill>`_", ":ref:`cn_api_fluid_layers_fill_constant`", "功能一致"
"33", "`tf.floor <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/floor>`_", ":ref:`cn_api_fluid_layers_floor`", "功能一致"
"34", "`tf.gather <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/gather>`_", ":ref:`cn_api_fluid_layers_gather`", "功能一致"
"35", "`tf.greater <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/greater>`_", "`运算符> <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"36", "`tf.greater_equal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/greater_equal>`_", "`运算符>= <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"37", "`tf.image.non_max_suppression <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/image/non_max_suppression>`_", ":ref:`cn_api_fluid_layers_multiclass_nms`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.image.non_max_suppression.md>`_"
"38", "`tf.image.resize_bilinear <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/image/resize_bilinear>`_", ":ref:`cn_api_fluid_layers_resize_bilinear`", "功能一致"
"39", "`tf.image.resize_images <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/image/resize_images>`_", ":ref:`cn_api_fluid_layers_image_resize`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.image.resize_images.md>`_"
"40", "`tf.image.resize_nearest_neighbor <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/image/resize_nearest_neighbor>`_", ":ref:`cn_api_fluid_layers_resize_nearest`", "功能一致"
"41", "`tf.is_finite <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/is_finite>`_", ":ref:`cn_api_fluid_layers_isfinite`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.math.is_finite.md>`_"
"42", "`tf.layers.batch_normalization <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"43", "`tf.layers.conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.conv2d.md>`_"
"44", "`tf.layers.dense <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/dense>`_", ":ref:`cn_api_fluid_layers_fc`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.dense.md>`_"
"45", "`tf.layers.dropout <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致"
"46", "`tf.layers.Dropout <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/Dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致"
"47", "`tf.layers.flatten <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/flatten>`_", ":ref:`cn_api_fluid_layers_flatten`", "功能一致"
"48", "`tf.less <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/less>`_", "`运算符< <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"49", "`tf.less_equal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/less_equal>`_", "`运算符<= <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"50", "`tf.log <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/log>`_", ":ref:`cn_api_fluid_layers_log`", "功能一致"
"51", "`tf.logical_and <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/logical_and>`_", ":ref:`cn_api_fluid_layers_logical_and`", "功能一致"
"52", "`tf.logical_not <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/logical_not>`_", ":ref:`cn_api_fluid_layers_logical_not`", "功能一致"
"53", "`tf.logical_or <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/logical_or>`_", ":ref:`cn_api_fluid_layers_logical_or`", "功能一致"
"54", "`tf.losses.mean_squared_error <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/losses/mean_squared_error>`_", ":ref:`cn_api_fluid_layers_square_error_cost`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.mean_and_squared_error.md>`_"
"55", "`tf.losses.sigmoid_cross_entropy <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/losses/sigmoid_cross_entropy>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.sigmoid_cross_entropy.md>`_"
"56", "`tf.losses.softmax_cross_entropy <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/losses/softmax_cross_entropy>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "功能一致"
"57", "`tf.matmul <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/linalg/matmul>`_", ":ref:`cn_api_fluid_layers_matmul`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.matmul.md>`_"
"58", "`tf.maximum <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/maximum>`_", ":ref:`cn_api_fluid_layers_elementwise_max`", "功能一致"
"59", "`tf.metrics.accuracy <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/metrics/accuracy>`_", ":ref:`cn_api_fluid_layers_accuracy`", "功能一致"
"60", "`tf.metrics.mean <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/metrics/mean>`_", ":ref:`cn_api_fluid_layers_mean`", "功能一致"
"61", "`tf.minimum <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/minimum>`_", ":ref:`cn_api_fluid_layers_elementwise_min`", "功能一致"
"62", "`tf.multiply <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/multiply>`_", ":ref:`cn_api_fluid_layers_elementwise_mul`", "功能一致"
"63", "`tf.nn.avg_pool <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/avg_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.avg_pool.md>`_"
"64", "`tf.nn.batch_normalization <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"65", "`tf.nn.bidirectional_dynamic_rnn <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/bidirectional_dynamic_rnn>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.bidirectional_dynamic_rnn.md>`_"
"66", "`tf.nn.conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d.md>`_"
"67", "`tf.nn.conv2d_transpose <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv2d_transpose>`_", ":ref:`cn_api_fluid_layers_conv2d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d_transpose.md>`_"
"68", "`tf.nn.conv3d_transpose <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv3d_transpose>`_", ":ref:`cn_api_fluid_layers_conv3d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv3d_transpose.md>`_"
"69", "`tf.nn.depthwise_conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/depthwise_conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.depthwise_conv2d.md>`_"
"70", "`tf.nn.dynamic_rnn <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/dynamic_rnn>`_", ":ref:`cn_api_fluid_layers_DynamicRNN`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.dynamic_rnn.md>`_"
"71", "`tf.nn.l2_normalize <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/l2_normalize>`_", ":ref:`cn_api_fluid_layers_l2_normalize`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.l2_normalize.md>`_"
"72", "`tf.nn.leaky_relu <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/leaky_relu>`_", ":ref:`cn_api_fluid_layers_leaky_relu`", "功能一致"
"73", "`tf.nn.lrn <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/local_response_normalization>`_", ":ref:`cn_api_fluid_layers_lrn`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.lrn.md>`_"
"74", "`tf.nn.max_pool <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/max_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.max_pool.md>`_"
"75", "`tf.nn.relu <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/relu>`_", ":ref:`cn_api_fluid_layers_relu`", "功能一致"
"76", "`tf.nn.relu6 <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/relu6>`_", ":ref:`cn_api_fluid_layers_relu6`", "功能一致"
"77", "`tf.nn.rnn_cell.LSTMCell <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/rnn_cell/LSTMCell>`_", ":ref:`cn_api_fluid_layers_lstm_unit`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn_cell.LSTMCell.md>`_"
"78", "`tf.nn.separable_conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/separable_conv2d>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.separable_conv2d.md>`_"
"79", "`tf.nn.sigmoid <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致"
"80", "`tf.nn.sigmoid_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "功能一致"
"81", "`tf.nn.softmax <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/softmax>`_", ":ref:`cn_api_fluid_layers_softmax`", "功能一致"
"82", "`tf.nn.softmax_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/softmax_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.softmax_cross_entropy_with_logits.md>`_"
"83", "`tf.nn.softplus <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/softplus>`_", ":ref:`cn_api_fluid_layers_softplus`", "功能一致"
"84", "`tf.nn.softsign <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/softsign>`_", ":ref:`cn_api_fluid_layers_softsign`", "功能一致"
"85", "`tf.nn.tanh <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致"
"86", "`tf.one_hot <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/one_hot>`_", ":ref:`cn_api_fluid_layers_one_hot`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.one_hot.md>`_"
"87", "`tf.ones <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/ones>`_", ":ref:`cn_api_fluid_layers_ones`", "功能一致"
"88", "`tf.intializers.ones <https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/initializers/ones>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致"
"89", "`tf.pad <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/pad>`_", ":ref:`cn_api_fluid_layers_pad`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pad.md>`_"
"90", "`tf.placeholder <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/placeholder>`_", ":ref:`cn_api_fluid_layers_data`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.placeholder.md>`_"
"91", "`tf.pow <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/pow>`_", ":ref:`cn_api_fluid_layers_pow`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pow.md>`_"
"92", "`tf.print <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/print>`_", ":ref:`cn_api_fluid_layers_print`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.print.md>`_"
"93", "`tf.py_func <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/py_func>`_", ":ref:`cn_api_fluid_layers_py_func`", "功能一致"
"94", "`tf.random_normal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/random/normal>`_", ":ref:`cn_api_fluid_layers_gaussian_random`", "功能一致"
"95", "`tf.random_normal_initializer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/initializers/random_normal>`_", ":ref:`cn_api_fluid_initializer_Normal`", "功能一致"
"96", "`tf.random_uniform <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/random/uniform>`_", ":ref:`cn_api_fluid_layers_uniform_random`", "功能一致"
"97", "`tf.random_uniform_initializer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/initializers/random_uniform>`_", ":ref:`cn_api_fluid_initializer_UniformInitializer`", "功能一致"
"98", "`tf.reduce_logsumexp <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_logsumexp>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.reduce_logsumexp.md>`_"
"99", "`tf.reduce_max <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_max>`_", ":ref:`cn_api_fluid_layers_reduce_max`", "功能一致"
"100", "`tf.reduce_mean <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_mean>`_", ":ref:`cn_api_fluid_layers_reduce_mean`", "功能一致"
"101", "`tf.reduce_min <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_min>`_", ":ref:`cn_api_fluid_layers_reduce_min`", "功能一致"
"102", "`tf.reduce_sum <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_sum>`_", ":ref:`cn_api_fluid_layers_reduce_sum`", "功能一致"
"103", "`tf.reshape <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reshape>`_", ":ref:`cn_api_fluid_layers_reshape`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reshape.md>`_"
"104", "`tf.reverse <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致"
"105", "`tf.reverse_sequence <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reverse_sequence>`_", ":ref:`cn_api_fluid_layers_sequence_reverse`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reverse_sequence.md>`_"
"106", "`tf.reverse_v2 <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致"
"107", "`tf.round <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/round>`_", ":ref:`cn_api_fluid_layers_round`", "功能一致"
"108", "`tf.rsqrt <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/rsqrt>`_", ":ref:`cn_api_fluid_layers_rsqrt`", "功能一致"
"109", "`tf.scalar_mul <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/scalar_mul>`_", ":ref:`cn_api_fluid_layers_scale`", "功能一致"
"110", "`tf.scatter_update <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/scatter_update>`_", ":ref:`cn_api_fluid_layers_scatter`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.scatter_update.md>`_"
"111", "`tf.sequence_mask <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/sequence_mask>`_", ":ref:`cn_api_fluid_layers_sequence_mask`", "功能一致"
"112", "`tf.shape <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/shape>`_", ":ref:`cn_api_fluid_layers_shape`", "功能一致"
"113", "`tf.sigmoid <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致"
"114", "`tf.sin <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sin>`_", ":ref:`cn_api_fluid_layers_sin`", "功能一致"
"115", "`tf.slice <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/slice>`_", ":ref:`cn_api_fluid_layers_slice`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.slice.md>`_"
"116", "`tf.split <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/split>`_", ":ref:`cn_api_fluid_layers_split`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.split.md>`_"
"117", "`tf.sqrt <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sqrt>`_", ":ref:`cn_api_fluid_layers_sqrt`", "功能一致"
"118", "`tf.square <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/square>`_", ":ref:`cn_api_fluid_layers_square`", "功能一致"
"119", "`tf.squared_difference <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/squared_difference>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.squared_difference.md>`_"
"120", "`tf.squeeze <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/squeeze>`_", ":ref:`cn_api_fluid_layers_squeeze`", "功能一致"
"121", "`tf.stack <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/stack>`_", ":ref:`cn_api_fluid_layers_stack`", "功能一致"
"122", "`tf.stop_gradient <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/stop_gradient>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.stop_gradient.md>`_"
"123", "`tf.subtract <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/subtract>`_", ":ref:`cn_api_fluid_layers_elementwise_sub`", "功能一致"
"124", "`tf.tanh <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致"
"125", "`tf.tile <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/tile>`_", ":ref:`cn_api_fluid_layers_expand`", "功能一致"
"126", "`tf.top_k <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/top_k>`_", ":ref:`cn_api_fluid_layers_topk`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.top_k.md>`_"
"127", "`tf.train.AdagradOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/AdagradOptimizer>`_", ":ref:`cn_api_fluid_optimizer_AdagradOptimizer`", "功能一致"
"128", "`tf.train.AdamOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/AdamOptimizer>`_", ":ref:`cn_api_fluid_optimizer_Adam`", "功能一致"
"129", "`tf.train.exponential_decay <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/exponential_decay>`_", ":ref:`cn_api_fluid_layers_exponential_decay`", "功能一致"
"130", "`tf.train.GradientDescentOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/GradientDescentOptimizer>`_", ":ref:`cn_api_fluid_optimizer_SGDOptimizer`", "功能一致"
"131", "`tf.train.MomentumOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/MomentumOptimizer>`_", ":ref:`cn_api_fluid_optimizer_MomentumOptimizer`", "功能一致"
"132", "`tf.train.polynomial_decay <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/polynomial_decay>`_", ":ref:`cn_api_fluid_layers_polynomial_decay`", "功能一致"
"133", "`tf.train.RMSPropOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/RMSPropOptimizer>`_", ":ref:`cn_api_fluid_optimizer_RMSPropOptimizer`", "功能一致"
"134", "`tf.transpose <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/transpose>`_", ":ref:`cn_api_fluid_layers_transpose`", "功能一致"
"135", "`tf.truediv <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/truediv>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"136", "`tf.truncated_normal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/random/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致"
"137", "`tf.truncated_normal_initializer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/initializers/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致"
"138", "`tf.unstack <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/unstack>`_", ":ref:`cn_api_fluid_layers_unstack`", "功能一致"
"139", "`tf.Variable <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/Variable>`_", ":ref:`cn_api_fluid_layers_create_parameter`", "功能一致"
"140", "`tf.while_loop <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/while_loop>`_", ":ref:`cn_api_fluid_layers_While`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.while_loop.md>`_"
"141", "`tf.zeros <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/zeros>`_", ":ref:`cn_api_fluid_layers_zeros`", "功能一致"
"142", "`tf.zeros_initializer <https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/zeros_initializer>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致"
"1", "`tf.abs <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/abs>`_", ":ref:`cn_api_fluid_layers_abs`", "功能一致"
"2", "`tf.add <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/add>`_", ":ref:`cn_api_fluid_layers_elementwise_add`", "功能一致"
"3", "`tf.argmax <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/argmax>`_", ":ref:`cn_api_fluid_layers_argmax`", "功能一致"
"4", "`tf.argmin <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/argmin>`_", ":ref:`cn_api_fluid_layers_argmin`", "功能一致"
"5", "`tf.assign <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/assign>`_", ":ref:`cn_api_fluid_layers_assign`", "功能一致"
"6", "`tf.assign_add <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/assign_add>`_", ":ref:`cn_api_fluid_layers_increment`", "功能一致"
"7", "`tf.case <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/case>`_", ":ref:`cn_api_fluid_layers_Switch`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.case.md>`_"
"8", "`tf.cast <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/dtypes/cast>`_", ":ref:`cn_api_fluid_layers_cast`", "功能一致"
"9", "`tf.clip_by_global_norm <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_global_norm>`_", ":ref:`cn_api_fluid_clip_GradientClipByGlobalNorm`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.clip_by_global_norm.md>`_"
"10", "`tf.clip_by_norm <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_norm>`_", ":ref:`cn_api_fluid_layers_clip_by_norm`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.clip_by_norm.md>`_"
"11", "`tf.clip_by_value <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_value>`_", ":ref:`cn_api_fluid_layers_clip`", "功能一致"
"12", "`tf.concat <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/concat>`_", ":ref:`cn_api_fluid_layers_concat`", "功能一致"
"13", "`tf.cond <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/cond>`_", ":ref:`cn_api_fluid_layers_ifElse`", "功能一致"
"14", "`tf.constant <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/constant>`_", ":ref:`cn_api_fluid_layers_fill_constant`", "功能一致"
"15", "`tf.contrib.layers.batch_norm <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/batch_norm>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"16", "`tf.contrib.layers.flatten <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/flatten>`_", ":ref:`cn_api_fluid_layers_flatten`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.contrib.layers.flatten.md>`_"
"17", "`tf.contrib.layers.fully_connected <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/fully_connected>`_", ":ref:`cn_api_fluid_layers_fc`", "功能一致"
"18", "`tf.contrib.layers.one_hot_encoding <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/one_hot_encoding>`_", ":ref:`cn_api_fluid_layers_one_hot`", "功能一致"
"19", "`tf.contrib.layers.softmax <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/softmax>`_", ":ref:`cn_api_fluid_layers_softmax`", "功能一致"
"20", "`tf.contrib.layers.xavier_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/xavier_initializer>`_", ":ref:`cn_api_fluid_initializer_Xavier`", "功能一致"
"21", "`tf.nn.rnn.GRUCell <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/rnn_cell/GRUCell>`_", ":ref:`cn_api_fluid_layers_gru_unit`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn.GRUCell.md>`_"
"22", "`tf.nn.rnn.MultiRNNCell <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/rnn_cell/MultiRNNCell>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn_cell.MultiRNNCell.md>`_"
"23", "`tf.nn.rnn.static_rnn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/static_rnn>`_", ":ref:`cn_api_fluid_layers_DynamicRNN`", "功能一致"
"24", "`tf.convert_to_tensor <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/convert_to_tensor>`_", ":ref:`cn_api_fluid_layers_assign`", "功能一致"
"25", "`tf.cos <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/cos>`_", ":ref:`cn_api_fluid_layers_cos`", "功能一致"
"26", "`tf.div <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/div>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"27", "`tf.divide <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/divide>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"28", "`tf.dropout <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.dropout.md>`_"
"29", "`tf.equal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/equal>`_", "`运算符== <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"30", "`tf.exp <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/exp>`_", ":ref:`cn_api_fluid_layers_exp`", "功能一致"
"31", "`tf.expand_dims <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/expand_dims>`_", ":ref:`cn_api_fluid_layers_unsqueeze`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.expand_dims.md>`_"
"32", "`tf.fill <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/fill>`_", ":ref:`cn_api_fluid_layers_fill_constant`", "功能一致"
"33", "`tf.floor <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/floor>`_", ":ref:`cn_api_fluid_layers_floor`", "功能一致"
"34", "`tf.gather <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/gather>`_", ":ref:`cn_api_fluid_layers_gather`", "功能一致"
"35", "`tf.greater <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/greater>`_", "`运算符> <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"36", "`tf.greater_equal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/greater_equal>`_", "`运算符>= <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"37", "`tf.image.non_max_suppression <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/non_max_suppression>`_", ":ref:`cn_api_fluid_layers_multiclass_nms`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.image.non_max_suppression.md>`_"
"38", "`tf.image.resize_bilinear <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/resize_bilinear>`_", ":ref:`cn_api_fluid_layers_resize_bilinear`", "功能一致"
"39", "`tf.image.resize_images <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/resize_images>`_", ":ref:`cn_api_fluid_layers_image_resize`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.image.resize_images.md>`_"
"40", "`tf.image.resize_nearest_neighbor <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/resize_nearest_neighbor>`_", ":ref:`cn_api_fluid_layers_resize_nearest`", "功能一致"
"41", "`tf.is_finite <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/is_finite>`_", ":ref:`cn_api_fluid_layers_isfinite`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.math.is_finite.md>`_"
"42", "`tf.layers.batch_normalization <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"43", "`tf.layers.conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.conv2d.md>`_"
"44", "`tf.layers.dense <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/dense>`_", ":ref:`cn_api_fluid_layers_fc`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.dense.md>`_"
"45", "`tf.layers.dropout <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致"
"46", "`tf.layers.Dropout <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/Dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致"
"47", "`tf.layers.flatten <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/flatten>`_", ":ref:`cn_api_fluid_layers_flatten`", "功能一致"
"48", "`tf.less <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/less>`_", "`运算符< <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"49", "`tf.less_equal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/less_equal>`_", "`运算符<= <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"50", "`tf.log <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/log>`_", ":ref:`cn_api_fluid_layers_log`", "功能一致"
"51", "`tf.logical_and <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/logical_and>`_", ":ref:`cn_api_fluid_layers_logical_and`", "功能一致"
"52", "`tf.logical_not <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/logical_not>`_", ":ref:`cn_api_fluid_layers_logical_not`", "功能一致"
"53", "`tf.logical_or <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/logical_or>`_", ":ref:`cn_api_fluid_layers_logical_or`", "功能一致"
"54", "`tf.losses.mean_squared_error <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/losses/mean_squared_error>`_", ":ref:`cn_api_fluid_layers_square_error_cost`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.mean_and_squared_error.md>`_"
"55", "`tf.losses.sigmoid_cross_entropy <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/losses/sigmoid_cross_entropy>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.sigmoid_cross_entropy.md>`_"
"56", "`tf.losses.softmax_cross_entropy <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/losses/softmax_cross_entropy>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "功能一致"
"57", "`tf.matmul <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/linalg/matmul>`_", ":ref:`cn_api_fluid_layers_matmul`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.matmul.md>`_"
"58", "`tf.maximum <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/maximum>`_", ":ref:`cn_api_fluid_layers_elementwise_max`", "功能一致"
"59", "`tf.metrics.accuracy <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/metrics/accuracy>`_", ":ref:`cn_api_fluid_layers_accuracy`", "功能一致"
"60", "`tf.metrics.mean <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/metrics/mean>`_", ":ref:`cn_api_fluid_layers_mean`", "功能一致"
"61", "`tf.minimum <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/minimum>`_", ":ref:`cn_api_fluid_layers_elementwise_min`", "功能一致"
"62", "`tf.multiply <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/multiply>`_", ":ref:`cn_api_fluid_layers_elementwise_mul`", "功能一致"
"63", "`tf.nn.avg_pool <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/avg_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.avg_pool.md>`_"
"64", "`tf.nn.batch_normalization <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"65", "`tf.nn.bidirectional_dynamic_rnn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/bidirectional_dynamic_rnn>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.bidirectional_dynamic_rnn.md>`_"
"66", "`tf.nn.conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d.md>`_"
"67", "`tf.nn.conv2d_transpose <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/conv2d_transpose>`_", ":ref:`cn_api_fluid_layers_conv2d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d_transpose.md>`_"
"68", "`tf.nn.conv3d_transpose <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/conv3d_transpose>`_", ":ref:`cn_api_fluid_layers_conv3d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv3d_transpose.md>`_"
"69", "`tf.nn.depthwise_conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/depthwise_conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.depthwise_conv2d.md>`_"
"70", "`tf.nn.dynamic_rnn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/dynamic_rnn>`_", ":ref:`cn_api_fluid_layers_DynamicRNN`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.dynamic_rnn.md>`_"
"71", "`tf.nn.l2_normalize <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/l2_normalize>`_", ":ref:`cn_api_fluid_layers_l2_normalize`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.l2_normalize.md>`_"
"72", "`tf.nn.leaky_relu <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/leaky_relu>`_", ":ref:`cn_api_fluid_layers_leaky_relu`", "功能一致"
"73", "`tf.nn.lrn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/local_response_normalization>`_", ":ref:`cn_api_fluid_layers_lrn`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.lrn.md>`_"
"74", "`tf.nn.max_pool <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/max_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.max_pool.md>`_"
"75", "`tf.nn.relu <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/relu>`_", ":ref:`cn_api_fluid_layers_relu`", "功能一致"
"76", "`tf.nn.relu6 <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/relu6>`_", ":ref:`cn_api_fluid_layers_relu6`", "功能一致"
"77", "`tf.nn.rnn_cell.LSTMCell <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/rnn_cell/LSTMCell>`_", ":ref:`cn_api_fluid_layers_lstm_unit`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn_cell.LSTMCell.md>`_"
"78", "`tf.nn.separable_conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/separable_conv2d>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.separable_conv2d.md>`_"
"79", "`tf.nn.sigmoid <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致"
"80", "`tf.nn.sigmoid_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "功能一致"
"81", "`tf.nn.softmax <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/softmax>`_", ":ref:`cn_api_fluid_layers_softmax`", "功能一致"
"82", "`tf.nn.softmax_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/softmax_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.softmax_cross_entropy_with_logits.md>`_"
"83", "`tf.nn.softplus <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/softplus>`_", ":ref:`cn_api_fluid_layers_softplus`", "功能一致"
"84", "`tf.nn.softsign <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/softsign>`_", ":ref:`cn_api_fluid_layers_softsign`", "功能一致"
"85", "`tf.nn.tanh <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致"
"86", "`tf.one_hot <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/one_hot>`_", ":ref:`cn_api_fluid_layers_one_hot`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.one_hot.md>`_"
"87", "`tf.ones <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/ones>`_", ":ref:`cn_api_fluid_layers_ones`", "功能一致"
"88", "`tf.intializers.ones <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/ones>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致"
"89", "`tf.pad <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/pad>`_", ":ref:`cn_api_fluid_layers_pad`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pad.md>`_"
"90", "`tf.placeholder <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/placeholder>`_", ":ref:`cn_api_fluid_layers_data`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.placeholder.md>`_"
"91", "`tf.pow <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/pow>`_", ":ref:`cn_api_fluid_layers_pow`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pow.md>`_"
"92", "`tf.print <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/print>`_", ":ref:`cn_api_fluid_layers_print`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.print.md>`_"
"93", "`tf.py_func <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/py_func>`_", ":ref:`cn_api_fluid_layers_py_func`", "功能一致"
"94", "`tf.random_normal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/random/normal>`_", ":ref:`cn_api_fluid_layers_gaussian_random`", "功能一致"
"95", "`tf.random_normal_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/random_normal>`_", ":ref:`cn_api_fluid_initializer_Normal`", "功能一致"
"96", "`tf.random_uniform <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/random/uniform>`_", ":ref:`cn_api_fluid_layers_uniform_random`", "功能一致"
"97", "`tf.random_uniform_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/random_uniform>`_", ":ref:`cn_api_fluid_initializer_UniformInitializer`", "功能一致"
"98", "`tf.reduce_logsumexp <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_logsumexp>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.reduce_logsumexp.md>`_"
"99", "`tf.reduce_max <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_max>`_", ":ref:`cn_api_fluid_layers_reduce_max`", "功能一致"
"100", "`tf.reduce_mean <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_mean>`_", ":ref:`cn_api_fluid_layers_reduce_mean`", "功能一致"
"101", "`tf.reduce_min <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_min>`_", ":ref:`cn_api_fluid_layers_reduce_min`", "功能一致"
"102", "`tf.reduce_sum <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_sum>`_", ":ref:`cn_api_fluid_layers_reduce_sum`", "功能一致"
"103", "`tf.reshape <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reshape>`_", ":ref:`cn_api_fluid_layers_reshape`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reshape.md>`_"
"104", "`tf.reverse <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致"
"105", "`tf.reverse_sequence <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reverse_sequence>`_", ":ref:`cn_api_fluid_layers_sequence_reverse`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reverse_sequence.md>`_"
"106", "`tf.reverse_v2 <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致"
"107", "`tf.round <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/round>`_", ":ref:`cn_api_fluid_layers_round`", "功能一致"
"108", "`tf.rsqrt <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/rsqrt>`_", ":ref:`cn_api_fluid_layers_rsqrt`", "功能一致"
"109", "`tf.scalar_mul <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/scalar_mul>`_", ":ref:`cn_api_fluid_layers_scale`", "功能一致"
"110", "`tf.scatter_update <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/scatter_update>`_", ":ref:`cn_api_fluid_layers_scatter`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.scatter_update.md>`_"
"111", "`tf.sequence_mask <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/sequence_mask>`_", ":ref:`cn_api_fluid_layers_sequence_mask`", "功能一致"
"112", "`tf.shape <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/shape>`_", ":ref:`cn_api_fluid_layers_shape`", "功能一致"
"113", "`tf.sigmoid <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致"
"114", "`tf.sin <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sin>`_", ":ref:`cn_api_fluid_layers_sin`", "功能一致"
"115", "`tf.slice <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/slice>`_", ":ref:`cn_api_fluid_layers_slice`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.slice.md>`_"
"116", "`tf.split <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/split>`_", ":ref:`cn_api_fluid_layers_split`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.split.md>`_"
"117", "`tf.sqrt <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sqrt>`_", ":ref:`cn_api_fluid_layers_sqrt`", "功能一致"
"118", "`tf.square <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/square>`_", ":ref:`cn_api_fluid_layers_square`", "功能一致"
"119", "`tf.squared_difference <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/squared_difference>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.squared_difference.md>`_"
"120", "`tf.squeeze <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/squeeze>`_", ":ref:`cn_api_fluid_layers_squeeze`", "功能一致"
"121", "`tf.stack <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/stack>`_", ":ref:`cn_api_fluid_layers_stack`", "功能一致"
"122", "`tf.stop_gradient <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/stop_gradient>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.stop_gradient.md>`_"
"123", "`tf.subtract <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/subtract>`_", ":ref:`cn_api_fluid_layers_elementwise_sub`", "功能一致"
"124", "`tf.tanh <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致"
"125", "`tf.tile <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/tile>`_", ":ref:`cn_api_fluid_layers_expand`", "功能一致"
"126", "`tf.top_k <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/top_k>`_", ":ref:`cn_api_fluid_layers_topk`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.top_k.md>`_"
"127", "`tf.train.AdagradOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/AdagradOptimizer>`_", ":ref:`cn_api_fluid_optimizer_AdagradOptimizer`", "功能一致"
"128", "`tf.train.AdamOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/AdamOptimizer>`_", ":ref:`cn_api_fluid_optimizer_Adam`", "功能一致"
"129", "`tf.train.exponential_decay <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/exponential_decay>`_", ":ref:`cn_api_fluid_layers_exponential_decay`", "功能一致"
"130", "`tf.train.GradientDescentOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/GradientDescentOptimizer>`_", ":ref:`cn_api_fluid_optimizer_SGDOptimizer`", "功能一致"
"131", "`tf.train.MomentumOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/MomentumOptimizer>`_", ":ref:`cn_api_fluid_optimizer_MomentumOptimizer`", "功能一致"
"132", "`tf.train.polynomial_decay <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/polynomial_decay>`_", ":ref:`cn_api_fluid_layers_polynomial_decay`", "功能一致"
"133", "`tf.train.RMSPropOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/RMSPropOptimizer>`_", ":ref:`cn_api_fluid_optimizer_RMSPropOptimizer`", "功能一致"
"134", "`tf.transpose <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/transpose>`_", ":ref:`cn_api_fluid_layers_transpose`", "功能一致"
"135", "`tf.truediv <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/truediv>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"136", "`tf.truncated_normal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/random/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致"
"137", "`tf.truncated_normal_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致"
"138", "`tf.unstack <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/unstack>`_", ":ref:`cn_api_fluid_layers_unstack`", "功能一致"
"139", "`tf.Variable <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Variable>`_", ":ref:`cn_api_fluid_layers_create_parameter`", "功能一致"
"140", "`tf.while_loop <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/while_loop>`_", ":ref:`cn_api_fluid_layers_While`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.while_loop.md>`_"
"141", "`tf.zeros <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/zeros>`_", ":ref:`cn_api_fluid_layers_zeros`", "功能一致"
"142", "`tf.zeros_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/zeros_initializer>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致"
.. _cn_user_guide_broadcasting:
==================
广播 (broadcasting)
==================
飞桨(PaddlePaddle,以下简称Paddle)和其他框架一样,提供的一些API支持广播(broadcasting)机制,允许在一些运算时使用不同形状的张量。
通常来讲,如果有一个形状较小和一个形状较大的张量,我们希望多次使用较小的张量来对较大的张量执行一些操作,看起来像是较小形状的张量的形状首先被扩展到和较大形状的张量一致,然后做运算。
值得注意的是,这期间并没有对较小形状张量的数据拷贝操作。
飞桨的广播机制主要参考了 `Numpy 广播机制 <https://numpy.org/doc/stable/user/basics.broadcasting.html#module-numpy.doc.broadcasting>`_ 。如果两个张量的形状满足以下规则,我们认为这两个张量是可广播的:
1. 每个张量至少为一维张量
2. 从后往前比较张量的形状,当前维度的大小要么相等,要么其中一个等于一,要么其中一个不存在
例如:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.ones((2,3,4), np.float32))
y = paddle.imperative.to_variable(np.ones((2,3,4), np.float32))
# 两个张量 形状一致,可以广播
x = paddle.imperative.to_variable(np.ones((2,3,1,5), np.float32))
y = paddle.imperative.to_variable(np.ones((3,4,1), np.float32))
# 从后向前依次比较:
# 第一次:y的维度大小是1
# 第二次:x的维度大小是1
# 第三次:x和y的维度大小相等
# 第四次:y的维度不存在
# 所以 x和y是可以广播的
# 相反
x = paddle.imperative.to_variable(np.ones((2,3,4), np.float32))
y = paddle.imperative.to_variable(np.ones((2,3,6), np.float32))
# 此时x和y是不可广播的,因为第一次比较 4不等于6
现在我们知道什么情况下两个张量是可以广播的,两个张量进行广播语义后的结果张量的形状计算规则如下:
1. 如果两个张量的形状长度(维度数)不一致,那么需要在维度较少的张量的形状前面补1,直到两个张量的形状长度相等。
2. 保证两个张量形状相等之后,每个维度上的结果维度就是当前维度上较大的那个。
例如:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.ones((2,1,4), np.float32))
y = paddle.imperative.to_variable(np.ones((3,1), np.float32))
z = x+y
print(z.shape)
# z的形状: [2,3,4]
x = paddle.imperative.to_variable(np.ones((2,1,4), np.float32))
y = paddle.imperative.to_variable(np.ones((3,2), np.float32))
z = x+y
print(z.shape)
# InvalidArgumentError: Broadcast dimension mismatch.
除此之外,飞桨的elementwise系列API针对广播机制增加了axis参数。当使用形状较小的y来匹配形状较大的x,且y的形状长度小于x的形状长度时,
axis表示y在x上应用广播机制时的起始维度位置。设置了axis参数后,张量的维度比较顺序变为从axis开始、从前向后比较。当axis=-1时,axis = rank(x) - rank(y),
同时y的大小为1的尾部维度将被忽略。
例如:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.ones((2,1,4), np.float32))
y = paddle.imperative.to_variable(np.ones((3,1), np.float32))
z = paddle.elementwise_add(x,y,axis=1)
# z的形状 [2, 3, 4]
x = paddle.imperative.to_variable(np.ones((2,3,4,5), np.float32))
y = paddle.imperative.to_variable(np.ones((4,5), np.float32))
z = paddle.elementwise_add(x,y,axis=1)
print(z.shape)
# InvalidArgumentError: Broadcast dimension mismatch.
# 因为指定了axis之后,计算广播的维度从axis开始从前向后比较
x = paddle.imperative.to_variable(np.ones((2,3,4,5), np.float32))
y = paddle.imperative.to_variable(np.ones((3), np.float32))
z = paddle.elementwise_add(x,y,axis=1)
print(z.shape)
# z的形状 [2, 3, 4, 5]
# 因为此时是从axis=1的维度开始,从前向后比较维度进行广播
.. _user_guide_broadcasting:
==================
Broadcasting
==================
Like other deep learning frameworks, PaddlePaddle provides broadcasting semantics in some of its APIs, which allows tensors with different shapes to be used in the same operation.
In general, broadcasting is the rule that describes how the smaller tensor is "broadcast" across the larger tensor so that the two end up with compatible shapes.
Note that no data is copied during broadcasting.
In PaddlePaddle, two tensors are broadcastable when the following rules hold (see `Numpy Broadcasting <https://numpy.org/doc/stable/user/basics.broadcasting.html#module-numpy.doc.broadcasting>`_):
1. each tensor has at least one dimension
2. when comparing their shapes element-wise from the trailing dimension forward, two dimensions are compatible when
they are equal, or one of them is 1, or one of them does not exist.
For example:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.ones((2,3,4), np.float32))
y = paddle.imperative.to_variable(np.ones((2,3,4), np.float32))
# the two tensors have the same shape, so they are broadcastable
x = paddle.imperative.to_variable(np.ones((2,3,1,5), np.float32))
y = paddle.imperative.to_variable(np.ones((3,4,1), np.float32))
# compare the dimensions from backward to forward:
# 1st step: y's dimension is 1
# 2nd step: x's dimension is 1
# 3rd step: the two dimensions are equal
# 4th step: y's dimension does not exist
# so x and y are broadcastable
# in contrast
x = paddle.imperative.to_variable(np.ones((2,3,4), np.float32))
y = paddle.imperative.to_variable(np.ones((2,3,6), np.float32))
# x and y are not broadcastable, because in the first step from the tail, x's dimension 4 is not equal to y's dimension 6
Now that we know when two tensors are broadcastable, the shape of the resulting tensor is calculated as follows:
1. If the number of dimensions of x and y are not equal, prepend 1 to the dimensions of the tensor with fewer dimensions to make them equal length.
2. Then, for each dimension size, the resulting dimension size is the max of the sizes of x and y along that dimension.
For example:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.ones((2,1,4), np.float32))
y = paddle.imperative.to_variable(np.ones((3,1), np.float32))
z = x+y
print(z.shape)
# z's shape: [2, 3, 4]
x = paddle.imperative.to_variable(np.ones((2,1,4), np.float32))
y = paddle.imperative.to_variable(np.ones((3,2), np.float32))
z = x+y
print(z.shape)
# InvalidArgumentError: Broadcast dimension mismatch.
In addition, PaddlePaddle's elementwise APIs add an axis argument to the broadcasting semantics. When a smaller tensor y is broadcast against a larger tensor x,
and y has fewer dimensions than x, axis indicates the dimension of x at which broadcasting starts.
In this case the comparison of dimensions runs from front to back, starting at axis. When axis=-1, axis = rank(x) - rank(y),
and trailing dimensions of y with size 1 are ignored.
For example:
.. code-block:: python
import paddle
import numpy as np
paddle.enable_imperative()
x = paddle.imperative.to_variable(np.ones((2,1,4), np.float32))
y = paddle.imperative.to_variable(np.ones((3,1), np.float32))
z = paddle.elementwise_add(x,y,axis=1)
# z's shape: [2, 3, 4]
x = paddle.imperative.to_variable(np.ones((2,3,4,5), np.float32))
y = paddle.imperative.to_variable(np.ones((4,5), np.float32))
z = paddle.elementwise_add(x,y,axis=1)
print(z.shape)
# InvalidArgumentError: Broadcast dimension mismatch.
# since axis is specified, the comparison between dimensions starts at axis.
x = paddle.imperative.to_variable(np.ones((2,3,4,5), np.float32))
y = paddle.imperative.to_variable(np.ones((3), np.float32))
z = paddle.elementwise_add(x,y,axis=1)
print(z.shape)
# z's shape: [2, 3, 4, 5]
# the comparison starts at axis=1 and runs from front to back.
......@@ -11,7 +11,7 @@
- `Operator <operator.html>`_ : Operator表示对数据的操作。
- `Program <program.html>`_ : Program表示对计算过程的描述。
- `Executor <executor.html>`_ : Executor表示执行引擎。
- `Broadcasting <broadcasting.html>`_ : Paddle对广播支持的说明。
.. toctree::
:hidden:
......@@ -22,4 +22,4 @@
operator.rst
program.rst
executor.rst
broadcasting.rst
......@@ -6,13 +6,13 @@ This paper introduces the basic concepts of Paddle:
- `Guide to Fluid Programming <./programming_guide/programming_guide_en.html>`_ :introduces the basic concept and usage of Paddle.
- `LoD-Tensor User Guide <lod_tensor_en.html>`_ : LoD-Tensor is a high-level feature of Paddle. It adds sequence information on the basis of tensor and supports processing variable length data.
- `Broadcasting <broadcasting_en.html>`_ : introduces the broadcasting semantics provided by Paddle.
.. toctree::
:hidden:
programming_guide/programming_guide_en.md
lod_tensor_en.rst
broadcasting_en.rst
.. _cn_user_guide_Operator:
=======
=========
Operator
=======
=========
在飞桨(PaddlePaddle,以下简称Paddle)中,所有对数据的操作都由Operator表示
......
***
<a name="third_party"></a>
# Appendix
## Compile Dependency Table
<p align="center">
<table>
<thead>
<tr>
<th> Dependency package name </th>
<th> Version </th>
<th> Description </th>
<th> Installation command </th>
</tr>
</thead>
<tbody>
<tr>
<td> CMake </td>
<td> 3.4 </td>
<td> </td>
<td> </td>
</tr>
<tr>
<td> GCC </td>
<td> 4.8 / 5.4 </td>
<td> recommends using devtools2 for CentOS </td>
<td> </td>
</tr>
<tr>
<td> Python </td>
<td> 2.7.x </td>
<td> depends on libpython2.7.so </td>
<td> <code> apt install python-dev </code> or <code> yum install python-devel </code></td>
</tr>
<tr>
<td> SWIG </td>
<td> at least 2.0 </td>
<td> </td>
<td> <code>apt install swig </code> or <code> yum install swig </code> </td>
</tr>
<tr>
<td> wget </td>
<td> any </td>
<td> </td>
<td> <code> apt install wget </code> or <code> yum install wget </code> </td>
</tr>
<tr>
<td> openblas </td>
<td> any </td>
<td> </td>
<td> </td>
</tr>
<tr>
<td> pip </td>
<td> at least 9.0.1 </td>
<td> </td>
<td> <code> apt install python-pip </code> or <code> yum install Python-pip </code> </td>
</tr>
<tr>
<td> numpy </td>
<td> >=1.12.0 </td>
<td> </td>
<td> <code> pip install numpy==1.14.0 </code> </td>
</tr>
<tr>
<td> protobuf </td>
<td> 3.1.0 </td>
<td> </td>
<td> <code> pip install protobuf==3.1.0 </code> </td>
</tr>
<tr>
<td> wheel </td>
<td> any </td>
<td> </td>
<td> <code> pip install wheel </code> </td>
</tr>
<tr>
<td> patchELF </td>
<td> any </td>
<td> </td>
<td> <code> apt install patchelf </code> or read github <a href="https://gist.github.com/ruario/80fefd174b3395d34c14">patchELF official documentation</a></td>
</tr>
<tr>
<td> go </td>
<td> >=1.8 </td>
<td> optional </td>
<td> </td>
</tr>
</tbody>
</table>
</p>
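Before compiling, a quick way to confirm that the Python-side dependencies in the table are in place is to query their versions from Python. The snippet below is only a convenience sketch, not part of the official build scripts:

```python
# Sanity-check the Python-side build dependencies listed in the table above.
import numpy
import pip
import wheel
import google.protobuf as protobuf

print(numpy.__version__)     # expected >= 1.12.0
print(pip.__version__)       # expected >= 9.0.1
print(wheel.__version__)     # any version
print(protobuf.__version__)  # the table pins protobuf 3.1.0
```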
***
<a name="Compile"></a>
</br></br>
## **Compile Option Table**
<p align="center">
<table>
<thead>
<tr>
<th> Option </th>
<th> Description </th>
<th> Default </th>
</tr>
</thead>
<tbody>
<tr>
<td> WITH_GPU </td>
<td> Whether to support GPU </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_C_API </td>
<td> Whether to compile CAPI </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_DOUBLE </td>
<td> Whether to use double precision floating point numbers </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_DSO </td>
<td> whether to load CUDA dynamic libraries dynamically at runtime, instead of statically loading CUDA dynamic libraries. </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_AVX </td>
<td> whether to compile PaddlePaddle binaries file containing the AVX instruction set </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_PYTHON </td>
<td> Whether the PYTHON interpreter is embedded </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_STYLE_CHECK </td>
<td> Whether to perform code style checking at compile time </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_TESTING </td>
<td> Whether to turn on unit test </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_DOC </td>
<td> Whether to compile Chinese and English documents </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_SWIG_PY </td>
<td> Whether to compile PYTHON's SWIG interface, which can be used for predicting and customizing training </td>
<td> Auto </td>
<tr>
<td> WITH_GOLANG </td>
<td> Whether to compile the fault-tolerant parameter server of the go language </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_MKL </td>
<td> Whether to use the MKL math library; if not, OpenBLAS is used </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_SYSTEM_BLAS </td>
<td> Whether to use the system's BLAS </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_DISTRIBUTE </td>
<td> Whether to Compile with distributed version </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_RDMA </td>
<td> Whether to compile the relevant parts that supports RDMA </td>
<td> OFF </td>
</tr>
<tr>
<td> WITH_BRPC_RDMA </td>
<td> Whether to use BRPC RDMA as RPC protocol </td>
<td> OFF </td>
</tr>
<tr>
<td> ON_INFER </td>
<td> Whether to turn on prediction optimization </td>
<td> OFF </td>
</tr>
<tr>
<td> DWITH_ANAKIN </td>
<td> Whether to Compile ANAKIN </td>
<td> OFF </td>
</tr>
<tr>
<td> CUDA_ARCH_NAME </td>
<td> Build for which GPU architecture </td>
<td> All: all available GPU architectures; Auto: automatically detect the current GPU architecture </td>
</tr>
<tr>
<td> TENSORRT_ROOT </td>
<td> Specify the TensorRT path </td>
<td> If this flag is not assigned, Paddle will detect TensorRT automatically. </td>
</tr>
</tbody>
</table>
</p>
**BLAS**
PaddlePaddle supports two BLAS libraries, [MKL](https://software.intel.com/en-us/mkl) and [OpenBLAS](http://www.openblas.net/). MKL is used by default. If you use MKL and the machine supports the AVX2 instruction set, the MKL-DNN math library will also be downloaded; for details please refer to [here](https://github.com/PaddlePaddle/Paddle/tree/release/0.11.0/doc/design/mkldnn#cmake).
If MKL is disabled, OpenBLAS will be used as the BLAS library.
**CUDA/cuDNN**
PaddlePaddle automatically finds the CUDA and cuDNN libraries installed in the system for compilation and execution at compile time/runtime. Use the parameter `-DCUDA_ARCH_NAME=Auto` to specify to enable automatic detection of the SM architecture and speed up compilation.
PaddlePaddle can be compiled and run using any version after cuDNN v5.1, but try to keep the same version of cuDNN in the compiling and running processes. We recommend using the latest version of cuDNN.
**Configure Compile Options**
PaddlePaddle locates the BLAS/CUDA/cuDNN libraries through paths specified at compile time. When CMake runs, it first searches the system paths ( `/usr/lib` and `/usr/local/lib` ) for these libraries, and also reads the relevant path variables. The paths can be set with the `-D` option, for example:
> `cmake .. -DWITH_GPU=ON -DWITH_TESTING=OFF -DCUDNN_ROOT=/opt/cudnnv5`
**Note**: These compile option settings only take effect in the first cmake run. If you want to change them later, it is recommended to clean up the entire build directory ( `rm -rf` ) and then run cmake again with the new options.
***
<a name="whls"></a>
</br></br>
## **Installation Package List**
<p align="center">
<table>
<thead>
<tr>
<th> Version Number </th>
<th> Release Description </th>
</tr>
</thead>
<tbody>
<tr>
<td> paddlepaddle==[version code] such as paddlepaddle==1.5.1 </td>
<td> Installs only the CPU version of PaddlePaddle for the corresponding version number; please refer to <a href=https://pypi.org/project/paddlepaddle/#history>Pypi</a> for the available versions. </td>
</tr>
<tr>
<td> paddlepaddle-gpu==1.5.1 </td>
<td> Using version 1.5.1 compiled with CUDA 9.0 and cuDNN 7 </td>
</tr>
<tr>
<td> paddlepaddle-gpu==1.5.1.post87 </td>
<td> Using version 1.5.1 compiled with CUDA 8.0 and cuDNN 7 </td>
</tr>
</tbody>
</table>
</p>
You can find various distributions of PaddlePaddle-gpu in [the Release History](https://pypi.org/project/paddlepaddle-gpu/#history).
Please note that on Windows, paddlepaddle-gpu will install the package compiled with CUDA 8.0 and cuDNN 7.
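After installing one of the packages above, a quick sanity check (a sketch, not an official verification tool) is to print the installed version and run a trivial program on the CPU:

```python
# Post-install sanity check: print the version and run a tiny Fluid program.
import paddle
import paddle.fluid as fluid

print(paddle.__version__)  # should match the wheel you installed, e.g. 1.5.1

x = fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.0)
exe = fluid.Executor(fluid.CPUPlace())
print(exe.run(fetch_list=[x]))  # [array([1.], dtype=float32)]
```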
***
<a name="dockers"></a>
</br></br>
## Installation Docker Images and Introduction
<p align="center">
<table>
<thead>
<tr>
<th> Version Number </th>
<th> Release Description </th>
</tr>
</thead>
<tbody>
<tr>
<td> hub.baidubce.com/paddlepaddle/paddle:latest </td>
<td> The latest pre-installed image of the PaddlePaddle CPU version </td>
</tr>
<tr>
<td> hub.baidubce.com/paddlepaddle/paddle:latest-dev </td>
<td> The latest PaddlePaddle development environment </td>
</tr>
<tr>
<td> hub.baidubce.com/paddlepaddle/paddle:[Version] </td>
<td> Replace [Version] with a specific version number to get the pre-installed PaddlePaddle image of that historical version </td>
</tr>
<tr>
<td> hub.baidubce.com/paddlepaddle/paddle:latest-gpu </td>
<td> The latest pre-installed image of the PaddlePaddle GPU version </td>
</tr>
</tbody>
</table>
</p>
You can find the docker image for each release of PaddlePaddle in the [DockerHub](https://hub.docker.com/r/paddlepaddle/paddle/tags/).
***
<a name="ciwhls-release"></a>
</br></br>
## **Multi-version whl package list - Release**
<p align="center">
<table>
<thead>
<tr>
<th> Release Instruction </th>
<th> cp27-cp27mu </th>
<th> cp27-cp27m </th>
<th> cp35-cp35m </th>
<th> cp36-cp36m </th>
<th> cp37-cp37m </th>
</tr>
</thead>
<tbody>
<tr>
<td> cpu-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cpu-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda10_cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> win_cpu_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda8_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> mac_cpu </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp37-cp37m-macosx_10_6_intel.whl</a></td>
</tr>
</tbody>
</table>
</p>
<a name="ciwhls"></a>
</br></br>
## **Multi-version whl package list - dev**
<p align="center">
<table>
<thead>
<tr>
<th> Release Instruction </th>
<th> cp27-cp27mu </th>
<th> cp27-cp27m </th>
<th> cp35-cp35m </th>
<th> cp36-cp36m </th>
<th> cp37-cp37m </th>
</tr>
</thead>
<tbody>
<tr>
<td> cpu-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-mkl/paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-mkl/paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-mkl/paddlepaddle-latest-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-latest-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-mkl/paddlepaddle-latest-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-latest-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-mkl/paddlepaddle-latest-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-latest-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cpu-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-openblas/paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-openblas/paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-openblas/paddlepaddle-latest-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-latest-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-openblas/paddlepaddle-latest-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-latest-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-cpu-openblas/paddlepaddle-latest-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-latest-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda10_cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-latest-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-latest-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/latest-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-latest-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
</tbody>
</table>
</p>
</br></br>
## Execute the PaddlePaddle training program in Docker
***
Suppose you have written a PaddlePaddle program `train.py` in the current directory (such as /home/work; refer to [PaddlePaddleBook](https://github.com/PaddlePaddle/book/blob/develop/01.fit_a_line/README.cn.md) for how to write it). You can start the training with the following command:
cd /home/work
docker run -it -v $PWD:/work hub.baidubce.com/paddlepaddle/paddle /work/train.py
In the above command, `-it` runs the container interactively; `-v $PWD:/work` mounts the current path (the `PWD` variable in Linux expands to the absolute path of the current directory) to the `/work` directory inside the container; `hub.baidubce.com/paddlepaddle/paddle` specifies the image to use; finally `/work/train.py` is the command executed inside the container, i.e. the training program.
Of course, you can also enter into the Docker container and execute or debug your code interactively:
docker run -it -v $PWD:/work hub.baidubce.com/paddlepaddle/paddle /bin/bash
cd /work
python train.py
**Note: In order to reduce the image size, vim is not installed in the PaddlePaddle Docker image by default. You can edit code in the container after executing `apt-get install -y vim` inside the container.**
</br></br>
## Start PaddlePaddle Book tutorial with Docker
***
Use Docker to quickly launch a local Jupyter Notebook containing the PaddlePaddle official Book tutorial, which can be viewed on the web. PaddlePaddle Book is an interactive Jupyter Notebook for users and developers. If you want to learn more about deep learning, PaddlePaddle Book is definitely your best choice. You can read tutorials or create and share interactive documents with code, formulas, charts, and text.
We provide a Docker image that can run the PaddlePaddle Book directly:
`docker run -p 8888:8888 hub.baidubce.com/paddlepaddle/book`
Users in mainland China can use the following image source to speed up access:
`docker run -p 8888:8888 hub.baidubce.com/paddlepaddle/book`
Then enter the following URL in your browser:
`http://localhost:8888/`
It's that simple and bon voyage! For further questions, please refer to the [FAQ](#FAQ).
</br></br>
## Perform GPU training using Docker
***
In order to ensure that the GPU driver works properly in the image, we recommend using [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) to run the image. Don't forget to install the latest GPU drivers on your physical machine in advance.
`nvidia-docker run -it -v $PWD:/work hub.baidubce.com/paddlepaddle/paddle:latest-gpu /bin/bash`
**Note: If you don't have nvidia-docker installed, you can try the following to mount the CUDA library and Linux devices into the Docker container:**
export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') \
$(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} \
${DEVICES} -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu
**About AVX:**
AVX is a set of CPU instructions that speeds up PaddlePaddle's computation. The latest PaddlePaddle Docker image is compiled with AVX enabled by default, so if your computer does not support AVX, you need to compile a no-avx version of PaddlePaddle separately.
The following instructions can check if the Linux computer supports AVX:
`if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi`
If the output is No, you need to choose an image compiled without AVX.
......@@ -230,7 +230,7 @@ MacOS本机直接通过源码编译的方式安装PaddlePaddle出现`[paddle/flu
+ 问题解答
使用cmake版本为3.4则可。自行编译建议GCC版本:4.8、5.4以及更高。
CMake我们支持3.10以上版本,但GPU编译时3.12/3.13/3.14版本存在官方[Bug](https://cmake.org/pipermail/cmake/2018-September/068195.html),我们建议您使用CMake3.16版本。自行编译建议GCC版本:4.8、5.4以及更高。
##### Q: `wget: command not found`
......
......@@ -16,8 +16,8 @@
<tbody>
<tr>
<td> CMake </td>
<td> 3.4 </td>
<td> </td>
<td> 3.10, 3.11, 3.15, 3.16(推荐),3.17 </td>
<td> 3.12/3.13/3.14 版本存在官方Bug,请跳过该版本</td>
<td> </td>
</tr>
<tr>
......@@ -128,11 +128,6 @@
<td> 是否支持GPU </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_DSO </td>
<td> 是否运行时动态加载CUDA动态库,而非静态加载CUDA动态库 </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_AVX </td>
<td> 是否编译含有AVX指令集的PaddlePaddle二进制文件 </td>
......@@ -535,4 +530,3 @@ PaddlePaddle Book是为用户和开发者制作的一个交互式的Jupyter Note
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} \
${DEVICES} -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu
......@@ -17,8 +17,8 @@
<tbody>
<tr>
<td> CMake </td>
<td> 3.4 </td>
<td> </td>
<td> 3.10, 3.11, 3.15, 3.16(Recommend),3.17 </td>
<td> There is an official bug in version 3.12/3.13/3.14, please skip this version </td>
<td> </td>
</tr>
<tr>
......@@ -129,11 +129,6 @@
<td> Whether to support GPU </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_DSO </td>
<td> whether to load CUDA dynamic libraries dynamically at runtime, instead of statically loading CUDA dynamic libraries. </td>
<td> ON </td>
</tr>
<tr>
<td> WITH_AVX </td>
<td> whether to compile PaddlePaddle binaries file containing the AVX instruction set </td>
......@@ -535,5 +530,3 @@ In order to ensure that the GPU driver works properly in the image, we recommend
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} \
${DEVICES} -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu
......@@ -25,7 +25,6 @@
wget http://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
rpm -i nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
sudo apt-get install -y libnccl2=2.3.7-1+cuda9.0 libnccl-dev=2.3.7-1+cuda9.0
yum update -y
yum install -y libnccl-2.3.7-2+cuda9.0 libnccl-devel-2.3.7-2+cuda9.0 libnccl-static-2.3.7-2+cuda9.0
......@@ -89,7 +88,7 @@
> -it 与宿主机保持交互状态,`hub.baidubce.com/paddlepaddle/paddle:latest-dev` 使用名为`hub.baidubce.com/paddlepaddle/paddle:latest-dev`的镜像创建Docker容器,/bin/bash 进入容器后启动/bin/bash命令。
> 注意:hub.baidubce.com/paddlepaddle/paddle:latest-dev内部安装CUDA 8.0。
> 注意:hub.baidubce.com/paddlepaddle/paddle:latest-dev内部安装CUDA 10.0。
4. 进入Docker后进入paddle目录下:
......@@ -119,7 +118,7 @@
> 安装protobuf。
`apt install patchelf`
`yum install patchelf`
> 安装patchelf,PatchELF 是一个小而实用的程序,用于修改ELF可执行文件的动态链接器和RPATH。
......@@ -153,7 +152,7 @@
恭喜,至此您已完成PaddlePaddle的编译安装。您只需要进入Docker容器后运行PaddlePaddle,即可开始使用。更多Docker使用请参见[Docker官方文档](https://docs.docker.com)
> 注:PaddlePaddle Docker镜像为了减小体积,默认没有安装`vim`,您可以在容器中执行 `apt-get install -y vim` 来安装
> 注:PaddlePaddle Docker镜像为了减小体积,默认没有安装`vim`,您可以在容器中执行 `yum install -y vim` 来安装
<a name="ct_source"></a>
### **本机编译**
......@@ -206,7 +205,7 @@
* 这里特别提供`patchELF`的安装方法,其他的依赖可以使用`yum install`或者`pip install`/`pip3 install` 后跟依赖名称和版本安装:
`yum install patchelf`
> 不能使用apt安装的用户请参见patchElF github[官方文档](https://gist.github.com/ruario/80fefd174b3395d34c14)
> 不能使用yum安装的用户请参见patchElF github[官方文档](https://gist.github.com/ruario/80fefd174b3395d34c14)
7. 将PaddlePaddle的源码clone在当下目录下的Paddle的文件夹中,并进入Padde目录下:
......
......@@ -25,7 +25,6 @@
wget http://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
rpm -i nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
sudo apt-get install -y libnccl2=2.3.7-1+cuda9.0 libnccl-dev=2.3.7-1+cuda9.0
yum update -y
yum install -y libnccl-2.3.7-2+cuda9.0 libnccl-devel-2.3.7-2+cuda9.0 libnccl-static-2.3.7-2+cuda9.0
......@@ -90,7 +89,7 @@ Please follow the steps below to install:
> -it keeps interaction with the host,`hub.baidubce.com/paddlepaddle/paddle:latest-dev` use the image named `hub.baidubce.com/paddlepaddle/paddle:latest-dev` to create Docker container, /bin/bash start the /bin/bash command after entering the container.
> Note: hub.baidubce.com/paddlepaddle/paddle:latest-dev internally install CUDA 8.0.
> Note: hub.baidubce.com/paddlepaddle/paddle:latest-dev internally install CUDA 10.0.
4. After entering Docker, go to the paddle directory: `cd paddle`
......@@ -119,7 +118,7 @@ Please follow the steps below to install:
> Install protobuf 3.1.0
`apt install patchelf`
`yum install patchelf`
> Installing patchelf, PatchELF is a small and useful program for modifying the dynamic linker and RPATH of ELF executables.
......@@ -154,7 +153,7 @@ Please follow the steps below to install:
Congratulations, now that you have successfully installed PaddlePaddle using Docker, you only need to run PaddlePaddle after entering the Docker container. For more Docker usage, please refer to the [official Docker documentation](https://docs.docker.com/).
> Note: In order to reduce the size, `vim` is not installed in PaddlePaddle Docker image by default. You can edit the code in the container after executing `apt-get install -y vim` in the container.
> Note: In order to reduce the size, `vim` is not installed in PaddlePaddle Docker image by default. You can edit the code in the container after executing `yum install -y vim` in the container.
<a name="ct_source"></a>
### **Local compilation**
......@@ -215,7 +214,7 @@ Congratulations, now that you have successfully installed PaddlePaddle using Doc
* Here is the installation method for `patchELF`. Other dependencies can be installed using `yum install` or `pip install`/`pip3 install` followed by the name and version:
`yum install patchelf`
> Users who can't use apt installation can refer to patchElF github [official documentation](https://gist.github.com/ruario/80fefd174b3395d34c14).
> Users who can't use yum installation can refer to patchElF github [official documentation](https://gist.github.com/ruario/80fefd174b3395d34c14).
7. Put the PaddlePaddle source cloned in the Paddle folder in the current directory and go to the Paddle directory:
......
......@@ -149,9 +149,10 @@
- a. 这里特别说明一下**CMake**的安装:
由于我们使用的是CMake3.4请根据以下步骤:
1. 从CMake[官方网站](https://cmake.org/files/v3.4/cmake-3.4.3-Darwin-x86_64.dmg)下载CMake镜像并安装
CMake我们支持3.10以上版本,推荐使用CMake3.16,请根据以下步骤安装:
1. 从CMake[官方网站](https://cmake.org/files/v3.16/cmake-3.16.0-Darwin-x86_64.dmg)下载CMake镜像并安装
2. 在控制台输入`sudo "/Applications/CMake.app/Contents/bin/cmake-gui" –install`
- b. 如果您不想使用系统默认的blas而希望使用自己安装的OPENBLAS请参见[FAQ](../FAQ.html/#OPENBLAS)
......
......@@ -86,7 +86,7 @@
> -it 与宿主机保持交互状态,`hub.baidubce.com/paddlepaddle/paddle:latest-dev` 使用名为`hub.baidubce.com/paddlepaddle/paddle:latest-dev`的镜像创建Docker容器,/bin/bash 进入容器后启动/bin/bash命令。
> 注意:hub.baidubce.com/paddlepaddle/paddle:latest-dev内部安装CUDA 8.0。
> 注意:hub.baidubce.com/paddlepaddle/paddle:latest-dev内部安装CUDA 10.0。
4. 进入Docker后进入paddle目录下:
......
......@@ -86,7 +86,7 @@ Please follow the steps below to install:
> -it keeps interaction with the host,`hub.baidubce.com/paddlepaddle/paddle:latest-dev` use the image named `hub.baidubce.com/paddlepaddle/paddle:latest-dev` to create Docker container, /bin/bash start the /bin/bash command after entering the container.
> Note: hub.baidubce.com/paddlepaddle/paddle:latest-dev internally install CUDA 8.0.
> Note: hub.baidubce.com/paddlepaddle/paddle:latest-dev internally install CUDA 10.0.
4. After entering Docker, enter the Paddle Directory:
......
......@@ -27,7 +27,7 @@
1. 安装必要的工具 cmake,git 以及 python:
> cmake 需要 3.5 及以上版本, 可在官网[下载](https://cmake.org/download/),并添加到环境变量中。
> cmake我们支持3.10以上版本,但GPU编译时3.12/3.13/3.14版本存在官方[Bug](https://cmake.org/pipermail/cmake/2018-September/068195.html),我们建议您使用CMake3.16版本,可在官网[下载](https://cmake.org/download/),并添加到环境变量中。
> python 需要 2.7 及以上版本, 可在官网[下载](https://www.python.org/download/releases/2.7/)。
......
......@@ -29,7 +29,7 @@ There is one compilation methods in Windows system:
1. Install the necessary tools i.e. cmake, git and python:
> Cmake requires version 3.5 and above, which can be downloaded from the [official website](https://cmake.org/download/) and added to the environment variable.
> CMake requires version 3.10 and above, but versions 3.12/3.13/3.14 have an official [bug](https://cmake.org/pipermail/cmake/2018-September/068195.html) when compiling with GPU support, so we recommend CMake 3.16, which can be [downloaded](https://cmake.org/download/) from the official website and added to the environment variables.
> Python requires version 2.7 and above, which can be downloaded from the [official website](https://www.python.org/download/releases/2.7/).
......
......@@ -32,7 +32,7 @@
=================================
* 目前 **PaddlePaddle** 仅支持 **NVIDIA** 显卡的 **CUDA** 驱动
* 需要安装 `cuDNN <https://docs.nvidia.com/deeplearning/sdk/cudnn-install/>`_ ,版本要求 7.3+(For CUDA9/10)
* 需要安装 `cuDNN <https://docs.nvidia.com/deeplearning/sdk/cudnn-install/>`_ ,版本要求 7.6+(For CUDA9/10)
* 如果您需要 GPU 多卡模式,需要安装 `NCCL 2 <https://developer.nvidia.com/nccl/>`_
* 仅 Ubuntu/CentOS 支持 NCCL 2 技术
......
......@@ -32,7 +32,7 @@ The manuals will guide you to build and install PaddlePaddle on your 64-bit desk
=================================
* Currently, **PaddlePaddle** only supports **CUDA** driver of **NVIDIA** graphics card.
* You need to install `cuDNN <https://docs.nvidia.com/deeplearning/sdk/cudnn-install/>`_ , and version 7.3+ is required(For CUDA9/10)
* You need to install `cuDNN <https://docs.nvidia.com/deeplearning/sdk/cudnn-install/>`_ , and version 7.6+ is required(For CUDA9/10)
* If you need GPU multi-card mode, you need to install `NCCL 2 <https://developer.nvidia.com/nccl/>`_
* Only Ubuntu/CentOS support NCCL 2
......
......@@ -64,8 +64,8 @@
* 如果您的计算机有NVIDIA® GPU,请确保满足以下条件并且安装GPU版PaddlePaddle
* **CUDA 工具包10.0配合cuDNN v7.3+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **CUDA 工具包9.0配合cuDNN v7.3+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **CUDA 工具包10.0配合cuDNN v7.6+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **CUDA 工具包9.0配合cuDNN v7.6+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **GPU运算能力超过1.0的硬件设备**
您可参考NVIDIA官方文档了解CUDA和CUDNN的安装流程和配置方法,请见[CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/),[cuDNN](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/)
......
......@@ -65,8 +65,8 @@
* If your computer has NVIDIA® GPU, please make sure that the following conditions are met and install the GPU version of PaddlePaddle
* **CUDA toolkit 10.0 with cuDNN v7.3+(for multi card support, NCCL2.3.7 or higher)**
* **CUDA toolkit 9.0 with cuDNN v7.3+(for multi card support, NCCL2.3.7 or higher)**
* **CUDA toolkit 10.0 with cuDNN v7.6+(for multi card support, NCCL2.3.7 or higher)**
* **CUDA toolkit 9.0 with cuDNN v7.6+(for multi card support, NCCL2.3.7 or higher)**
* **Hardware devices with GPU computing power over 1.0**
You can refer to NVIDIA official documents for installation process and configuration method of CUDA and cudnn. Please refer to [CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/),[cuDNN](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/)
......
......@@ -64,8 +64,8 @@
* 如果您的计算机没有 NVIDIA® GPU,请安装CPU版的PaddlePaddle
* 如果您的计算机有 NVIDIA® GPU,并且满足以下条件,推荐安装GPU版的PaddlePaddle
* **CUDA 工具包10.0配合cuDNN v7.3+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **CUDA 工具包9.0配合cuDNN v7.3+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **CUDA 工具包10.0配合cuDNN v7.6+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **CUDA 工具包9.0配合cuDNN v7.6+(如需多卡支持,需配合NCCL2.3.7及更高)**
* **GPU运算能力超过1.0的硬件设备**
......
......@@ -64,8 +64,8 @@
* If your computer doesn't have NVIDIA® GPU, please install CPU version of PaddlePaddle
* If your computer has an NVIDIA® GPU and meets the following conditions, we recommend installing the GPU version of PaddlePaddle
* **CUDA toolkit 10.0 with cuDNN v7.3+(for multi card support, NCCL2.3.7 or higher)**
* **CUDA toolkit 9.0 with cuDNN v7.3+(for multi card support, NCCL2.3.7 or higher)**
* **CUDA toolkit 10.0 with cuDNN v7.6+(for multi card support, NCCL2.3.7 or higher)**
* **CUDA toolkit 9.0 with cuDNN v7.6+(for multi card support, NCCL2.3.7 or higher)**
* **Hardware devices with GPU computing power over 1.0**
......
......@@ -17,7 +17,7 @@
检测您的机器是否安装我们支持的CUDA,cuDNN,具体地:
1. 在`/usr/local/` 及其子目录下寻找 `cuda/cuda8/cuda9` 目录下的`version.txt`文件(通常如果您以默认方式安装了CUDA)。 如果提示未找到CUDA请使用命令`find / -name version.txt`找到您所需要的CUDA目录下的“version.txt”路径,然后按照提示输入。
1. 在`/usr/local/` 及其子目录下寻找 `cuda/cuda8/cuda9/cuda10` 目录下的`version.txt`文件(通常如果您以默认方式安装了CUDA)。 如果提示未找到CUDA请使用命令`find / -name version.txt`找到您所需要的CUDA目录下的“version.txt”路径,然后按照提示输入。
2. 在`/usr` 及其子目录下寻找文件 `cudnn.h` , 如果您的cuDNN未安装在默认路径请使用命令`find / -name cudnn.h`寻找您希望使用的cuDNN版本的`cudnn.h`路径并按提示输入
如果未找到相应文件,则会安装CPU版本的PaddlePaddle
......
#
# Some common descriptions used in Paddle API docs
# You can copy the wordings here if that is suitable to your scenario.
#
common_args_en = """
x (Tensor): The input tensor, its data type should be float32, float64, int32, int64.
    y (Tensor): The input tensor, its data type should be float32, float64, int32, int64.
name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
dtype (str, optional): The data type of the output tensor, can be float32, float64, int32, int64.
param_attr (ParamAttr, optional): The parameter attribute for learnable weights(Parameter) of this layer. For more information, please refer to :ref:`api_fluid_ParamAttr`.
bias_attr (ParamAttr, optional): The parameter attribute for learnable bias(Bias) of this layer. For more information, please refer to :ref:`api_fluid_ParamAttr`.
label (Tensor): The label value corresponding to input, its data type should be int32, int64.
learning_rate (Tensor|float): The learning rate, can be a Tensor or a float value. Default is 1e-03.
axis (int, optional): The axis along which to operate. Default is 0.
epsilon (float, optional): Small float added to denominator to avoid dividing by zero. Default is 1e-05.
is_test (bool, optional): A flag indicating whether execution is in test phase. Default is False, means not in test phase.
shape (Tensor|tuple|list): Shape of the Tensor. If shape is a list or tuple, its elements should be integers or Tensors with shape [1]. If shape is a Tensor, it should be a 1-D Tensor.
keep_dim (bool): Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true. Default is False.
filter_size (tuple|list|int): The size of convolving kernel. It can be a single integer or a tuple/list containing two integers, representing the height and width of the convolution window respectively. If it is a single integer, the height and width are equal to the integer.
padding (tuple|int): The padding size. It can be a single integer or a tuple containing two integers, representing the size of padding added to the height and width of the input. If it is a single integer, the both sides of padding are equal to the integer. Default is 0.
include_sublayers (bool, optional): Whether include the sublayers. If True, return list includes the sublayers weights. Default is True.
stride (tuple|int): The stride size. It can be a single integer or a tuple containing two integers, representing the strides of the convolution along the height and width. If it is a single integer, the height and width are equal to the integer. Default is 1.
groups (int, optional): The group number of the convolution layer. When group=n, the input and convolution kernels are divided equally into n groups: the first group of convolution kernels is convolved with the first group of inputs, the second group of convolution kernels with the second group of inputs, and so on, until the nth group of convolution kernels is convolved with the nth group of inputs. Default is 1.
regularization (WeightDecayRegularizer, optional): The strategy of regularization. There are two methods: :ref:`api_fluid_regularizer_L1Decay` and :ref:`api_fluid_regularizer_L2Decay` . If a parameter has already set a regularizer using :ref:`api_fluid_ParamAttr` , the regularization setting here in the optimizer will be ignored for this parameter. Otherwise, the regularization setting here in the optimizer will take effect. Default is None, meaning there is no regularization.
grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of some derived class of ``GradientClipBase`` . There are three clipping strategies ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` ). Default is None, meaning there is no gradient clipping.
dilation (tuple|int): The dilation size. It can be a single integer or a tuple containing two integers, representing the height and width of dilation of the convolution kernel elements. If it is a single integer, the height and width of dilation are equal to the integer. Default is 1.
stop_gradient (bool, optional): A boolean that mentions whether gradient should flow. Default is True, means stop calculate gradients.
force_cpu (bool, optional): Whether force to store the output tensor in CPU memory. If force_cpu is False, the output tensor will be stored in running device memory, otherwise it will be stored to the CPU memory. Default is False.
data_format (str, optional): Specify the input data format, the output data format will be consistent with the input, which can be "NCHW" or "NHWC". N is batch size, C is channels, H is height, and W is width. Default is "NCHW".
grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of some derived class of ``GradientClipBase`` . There are three clipping strategies ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` ). Default is None, meaning there is no gradient clipping.
num_filters (int): The number of filters. It is the same as the number of output channels.
dim (int, optional): A dimension along which to operate. Default is 0.
is_sparse (bool, optional): Whether to use sparse updating. For more information, please refer to :ref:`api_guide_sparse_update_en` . If it is True, sparse updating will be used.
place (fluid.CPUPlace()|fluid.CUDAPlace(N)|None): This parameter represents which device the executor runs on, and N means the GPU's id. When this parameter is None, PaddlePaddle will set the default device according to its installation version. If Paddle is CPU version, the default device would be set to CPUPlace(). If Paddle is GPU version, the default device would be set to CUDAPlace(0). Default is None.
num_filters (int): The number of convolution kernels, which is also the number of output channels.
"""
common_args_cn = """
x (Tensor) - 输入的 `Tensor` ,数据类型为:float32、float64、int32、int64。
y (Tensor) - 输入的 `Tensor` ,数据类型为:float32、float64、int32、int64。
name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
dtype (str,可选) - 输出 `Tensor` 的数据类型,支持int32、int64、float32、float64。
param_attr (ParamAttr,可选) – 该Layer的可学习的权重(Parameter)的参数属性。更多信息请参见 :ref:`cn_api_fluid_ParamAttr`。
bias_attr (ParamAttr,可选) - 该Layer的可学习的偏置(Bias)的参数属性。更多信息请参见 :ref:`cn_api_fluid_ParamAttr`。
label (Tensor) - 训练数据的标签,数据类型为:int32, int64。
learning_rate (Tensor|float) - 学习率,可以是一个 `Tensor` 或者是一个浮点数。默认值为1e-03.
axis (int,可选) - 指定对输入 `Tensor` 进行运算的轴。默认值为0。
epsilon (float,可选) - 添加到分母上的值以防止分母除0。默认值为1e-05。
is_test (bool,可选) - 用于表明是否在测试阶段执行。默认值为False,表示非测试阶段。
shape (Tensor|tuple|list) - `Tensor` 的形状。如果 `shape` 是一个列表或元组,则其元素应该是形状为[1]的整数或 `Tensor` 。 如果 `shape` 是 `Tensor` ,则它应该是1-D `Tensor`。
keep_dim (bool) - 是否在输出 `Tensor` 中保留减小的维度。如果 `keep_dim` 为True,则保留该维度;否则结果 `Tensor` 的维度将比输入 `Tensor` 小。默认值为False。
filter_size (tuple|list|int) - 卷积核大小。可以为单个整数或包含两个整数的元组或列表,分别表示卷积核的高和宽。如果为单个整数,表示卷积核的高和宽都等于该整数。
padding (tuple|int) – 填充大小。可以为单个整数或包含两个整数的元组,分别表示对输入高和宽两侧填充的大小。如果为单个整数,表示高和宽的填充都等于该整数。默认值为0。
include_sublayers (bool,可选) - 是否返回子层的参数。如果为True,返回的列表中包含子层的参数。默认值为True。
stride (tuple|int) - 步长大小。可以为单个整数或包含两个整数的元组,分别表示卷积沿着高和宽的步长。如果为单个整数,表示沿着高和宽的步长都等于该整数。默认值为1。
groups (int,可选) - 卷积的组数。当group=n,输入和卷积核分别平均分为n组,第一组卷积核和第一组输入进行卷积计算,第二组卷积核和第二组输入进行卷积计算,……,第n组卷积核和第n组输入进行卷积计算。默认值为1。
regularization (WeightDecayRegularizer,可选) - 正则化方法。支持两种正则化策略: :ref:`cn_api_fluid_regularizer_L1Decay` 、 :ref:`cn_api_fluid_regularizer_L2Decay` 。如果一个参数已经在 :ref:`cn_api_fluid_ParamAttr` 中设置了正则化,这里的正则化设置将被忽略;如果没有在 :ref:`cn_api_fluid_ParamAttr` 中设置正则化,这里的设置才会生效。默认值为None,表示没有正则化。
grad_clip (GradientClipBase,可选) – 梯度裁剪的策略,支持三种裁剪策略: :ref:`cn_api_fluid_clip_GradientClipByGlobalNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByValue` 。
dilation (tuple|int,可选) - 空洞大小。可以为单个整数或包含两个整数的元组,分别表示卷积核中的元素沿着高和宽的空洞。如果为单个整数,表示高和宽的空洞都等于该整数。默认值为1。
stop_gradient (bool,可选) - 提示是否应该停止计算梯度,默认值为True,表示停止计算梯度。
force_cpu (bool,可选) - 是否强制将输出Tensor写入CPU内存。如果为False,则将输出Tensor写入当前所在运算设备的内存,否则写入CPU内存中。默认为False。
data_format (str,可选) - 指定输入的数据格式,输出的数据格式将与输入保持一致,可以是"NCHW"和"NHWC"。N是批大小,C是通道数,H是高度,W是宽度。默认值为"NCHW"。
grad_clip (GradientClipBase,可选) – 梯度裁剪的策略,支持三种裁剪策略: :ref:`cn_api_fluid_clip_GradientClipByGlobalNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByValue` 。默认值为None,表示不使用梯度裁剪。
num_filters (int) - 卷积核的个数,与输出的通道数相同。
dim (int,可选) - 指定对输入Tensor进行运算的维度。默认值为0。
is_sparse (bool,可选) - 是否使用稀疏更新的方式,更多信息请参见 :ref:`api_guide_sparse_update` 。默认值为True,表示使用稀疏更新的方式。
place (fluid.CPUPlace()|fluid.CUDAPlace(N)|None) – 该参数表示Executor执行所在的设备,这里的N为GPU对应的ID。当该参数为None时,PaddlePaddle会根据其安装版本来设置默认设备。当PaddlePaddle是CPU版时,默认运行设备将会设置为 `fluid.CPUPlace()` ;当PaddlePaddle是GPU版本时,默认执行设备将会设置为 `fluid.CUDAPlace(0)` 。默认值为None。
num_filters (int) - 卷积核个数,同时也是输出的通道数。
"""
import argparse
import sys
import types
import os
import contextlib
def parse_arg():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--api_path',
        type=str,
        default='paddle.nn.functional.l1_loss',
        help='the function/class path')
    parser.add_argument(
        '--is_class',
        type=str,
        default='False',
        help='whether function or class, False means function')
    return parser.parse_args()


def add_index(en_doc_review_dir, api_name):
    # Append the new API page to the review_tmp toctree index.
    stream = open(en_doc_review_dir + '.rst', 'a')
    stream.write(' review_tmp/' + api_name + '.rst\n')
    stream.close()
    print('add index to ' + en_doc_review_dir + '.rst success')


def add_file(en_doc_review_dir, api_path, is_class=False):
    # Generate an .rst stub that pulls the API's docstring in via autodoc.
    api_path_list = api_path.split('.')
    api_name = api_path_list[-1]
    api_title = '_'.join(api_path_list[1:])
    stream = open(en_doc_review_dir + '/' + api_name + '.rst', 'w')
    stream.write('.. _api_' + api_title + ':\n')
    stream.write('\n')
    stream.write(api_name + '\n')
    for i in range(max(9, len(api_name))):
        stream.write('-')
    stream.write('\n')
    stream.write('\n')
    if is_class == 'True':
        stream.write('.. autoclass:: ' + api_path + '\n')
        stream.write(' :members:\n')
        stream.write(' :inherited-members:\n')
    else:
        stream.write('.. autofunction:: ' + api_path + '\n')
        stream.write(' :noindex:\n')
    stream.close()
    print('add ' + en_doc_review_dir + '/' + api_name + '.rst success')


def main():
    args = parse_arg()
    api_path = args.api_path
    is_class = args.is_class
    api_name = api_path.split('.')[-1]
    fluid_doc_path = os.getcwd()
    en_doc_review_dir = fluid_doc_path + '/doc/fluid/api/review_tmp'
    add_index(en_doc_review_dir, api_name)
    add_file(en_doc_review_dir, api_path, is_class)


if __name__ == '__main__':
    main()
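For a quick look at what `add_file` above emits, the dry-run sketch below writes the same function-style stub into a temporary directory instead of `doc/fluid/api/review_tmp`; the API path is simply the script's default.

```python
# Dry-run sketch of the .rst stub produced by add_file() above, written to a temp dir.
import os
import tempfile

api_path = "paddle.nn.functional.l1_loss"        # default of --api_path above
api_name = api_path.split(".")[-1]               # "l1_loss"
api_title = "_".join(api_path.split(".")[1:])    # "nn_functional_l1_loss"

review_dir = tempfile.mkdtemp()
rst_path = os.path.join(review_dir, api_name + ".rst")
with open(rst_path, "w") as f:
    f.write(".. _api_" + api_title + ":\n\n")
    f.write(api_name + "\n")
    f.write("-" * max(9, len(api_name)) + "\n\n")
    # Function-style API: autofunction; a class would get autoclass with :members:.
    f.write(".. autofunction:: " + api_path + "\n")
    f.write("    :noindex:\n")

with open(rst_path) as f:
    print(f.read())
```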
......@@ -7,14 +7,15 @@ if [ "$night" == "develop" ];then
wget -q https://paddle-wheel.bj.bcebos.com/0.0.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-0.0.0-cp27-cp27mu-linux_x86_64.whl
pip install -U paddlepaddle_gpu-0.0.0-cp27-cp27mu-linux_x86_64.whl
else
git clone https://github.com/PaddlePaddle/Paddle.git
mkdir Paddle/build && cd Paddle/build
cd Paddle/build
cmake .. -DWITH_GPU=ON -DWITH_COVERAGE=OFF -DWITH_TESTING=OFF -DCMAKE_BUILD_TYPE=Release
make -j`nproc`
pip install -U python/dist/paddlepaddle_gpu-0.0.0-cp27-cp27mu-linux_x86_64.whl
fi
for files in `echo $git_files`;do
cd /FluidDoc
grep "code-block" $files
if [ $? -eq 0 ] ;then
echo $files|grep 'doc/fluid/api_cn/.*/.*.rst'
......
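The rest of that loop is truncated here; purely as an assumption-laden sketch, the visible filtering step (keep changed files that contain a `code-block` directive and live under `doc/fluid/api_cn`) could be expressed in Python as follows. The changed-file list is invented for the example.

```python
# Sketch of the visible filtering step above; the changed-file list is made up.
import os
import re

changed_files = [
    "doc/fluid/api_cn/layers_cn/relu_cn.rst",   # hypothetical paths
    "doc/fluid/install/index_cn.rst",
]

api_cn_rst = re.compile(r"doc/fluid/api_cn/.*/.*\.rst$")

for path in changed_files:
    if not os.path.isfile(path):
        continue  # skip paths that do not exist in this sketch
    with open(path) as f:
        has_code_block = "code-block" in f.read()
    if has_code_block and api_cn_rst.search(path):
        print("would run the Chinese API sample-code check on", path)
```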
#!/usr/bin/env bash
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#=================================================
# Utils
#=================================================
set -ex
if [ -z ${BRANCH} ]; then
BRANCH="develop"
fi
BENCHMARK_ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}")/.." && pwd )"
echo ${BENCHMARK_ROOT}
function prepare_env(){
# Install pre-commit and the lint/test tools used by the style check
pip install pre-commit==1.21 pylint==1.9.5 pytest==4.6.9
}
function abort(){
echo "Your change doesn't follow benchmark's code style." 1>&2
echo "Please use pre-commit to check what is wrong." 1>&2
exit 1
}
function check_style(){
trap 'abort' 0
pre-commit install
commit_files=on
for file_name in `git diff --numstat upstream/$BRANCH| awk '{print $NF}'`;do
if ! pre-commit run --files $file_name ; then
git diff
commit_files=off
fi
done
if [ $commit_files == 'off' ];then
echo "code format error"
exit 1
fi
trap 0
}
prepare_env
check_style
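As a hedged, illustrative re-implementation only (the repository uses the bash version above), the same diff-then-pre-commit loop could look like this in Python; the `upstream/<branch>` remote and the `BRANCH` environment variable mirror the script's assumptions.

```python
# Illustrative Python version of the check_style() loop above; not used by the repo.
import os
import subprocess
import sys

branch = os.environ.get("BRANCH", "develop")

# Files changed relative to upstream/<branch>, like `git diff --numstat upstream/$BRANCH`.
diff = subprocess.run(
    ["git", "diff", "--name-only", "upstream/" + branch],
    capture_output=True, text=True, check=True)
changed_files = [line for line in diff.stdout.splitlines() if line.strip()]

clean = True
for file_name in changed_files:
    # pre-commit exits non-zero when a hook rejects or rewrites the file.
    if subprocess.run(["pre-commit", "run", "--files", file_name]).returncode != 0:
        clean = False

if not clean:
    print("code format error")
    sys.exit(1)
```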
......@@ -6,12 +6,12 @@ for API_FILE in ${API_FILES[*]}; do
if [ "${API_CHANGE}" ];then
approval_line=`curl -H "Authorization: token ${GITHUB_API_TOKEN}" https://api.github.com/repos/PaddlePaddle/FluidDoc/pulls/${GIT_PR_ID}/reviews?per_page=10000`
if [ "${API_FILE}" == "doc/fluid" ];then
APPROVALS=`echo ${approval_line}|python ./scripts/check_pr_approval.py 1 31623103 2870059 27208573 28379894`
APPROVALS=`echo ${approval_line}|python ./scripts/check_pr_approval.py 1 2870059 27208573 29231 28379894 23093488 11935832`
fi
fi
if [ "${APPROVALS}" == "FALSE" ]; then
if [ "${API_FILE}" == "doc/fluid" ];then
echo "You must have one TPM (saxon-zh or Boyan-Liu or swtkiwi or Heeenrrry) approval for the api change! ${API_FILE} for the management reason of API interface and API document."
echo "You must have one TPM (saxon-zh or swtkiwi or jzhang533 or Heeenrrry or dingjiaweiww or TCChenlong) approval for the api change! ${API_FILE} for the management reason of API interface and API document."
fi
exit 1
fi
......
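`scripts/check_pr_approval.py` itself is not part of this diff, so the sketch below is only a guess at the kind of check it performs: count APPROVED reviews from the whitelisted reviewer ids in the GitHub reviews JSON piped in above. Its real behaviour may differ.

```python
# Rough guess at an approval counter like check_pr_approval.py; the real script may differ.
import json
import sys

def check_approval(required_count, reviewer_ids, reviews_json):
    reviews = json.loads(reviews_json)
    approved_ids = {
        review["user"]["id"]
        for review in reviews
        if review.get("state") == "APPROVED" and review["user"]["id"] in reviewer_ids
    }
    return "TRUE" if len(approved_ids) >= required_count else "FALSE"

if __name__ == "__main__":
    # Usage mirrors the call above: required count, then reviewer ids; JSON on stdin.
    required = int(sys.argv[1])
    ids = {int(arg) for arg in sys.argv[2:]}
    print(check_approval(required, ids, sys.stdin.read()))
```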
......@@ -2,7 +2,12 @@
DIR_PATH="/FluidDoc"
/bin/bash ${DIR_PATH}/scripts/check_api_cn.sh
/bin/bash ${DIR_PATH}/scripts/check_code.sh
if [ $? -ne 0 ];then
echo "code format error"
exit 1
fi
/bin/bash -x ${DIR_PATH}/scripts/check_api_cn.sh
if [ $? -ne 0 ];then
exit 1
fi
......