Unverified commit 6c07aac7, authored by SunAhong1993, committed by GitHub

Remove the static-graph code (#600)

* fix the code

* fix the visit_tuple

* Update stargan.md

* Update ultra_light_fast_generic_face_detector.md

* fix the docs

* remove static

* fix

* fix

* fix

* fix the docs
Co-authored-by: channingss <chen_lingchi@163.com>
Parent 8f3f7c16
......@@ -2,13 +2,13 @@
This document lists the PyTorch-to-PaddlePaddle API mapping for loss computation.
| No. | PyTorch API | PaddlePaddle API | Notes |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | [torch.nn.L1Loss](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html?highlight=l1loss#torch.nn.L1Loss) | [paddle.nn.loss.L1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/L1Loss_cn.html#l1loss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 2 | [torch.nn.MSELoss](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mseloss#torch.nn.MSELoss) | [paddle.nn.MSELoss](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mseloss#torch.nn.MSELoss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 3 | [torch.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/CrossEntropyLoss_cn.html#crossentropyloss) | [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/CrossEntropyLoss_cn.html#crossentropyloss) | [Difference comparison](torch.nn.CrossEntropyLoss.md) |
| 4 | [torch.nn.KLDivLoss](https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html?highlight=kldivloss#torch.nn.KLDivLoss) | [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/KLDivLoss_cn.html) | [Difference comparison](torch.nn.KLDivLoss.md) |
| 5 | [torch.nn.BCELoss](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html?highlight=bceloss#torch.nn.BCELoss) | [paddle.nn.BCELoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/BCELoss_cn.html#bceloss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 6 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html?highlight=bcewithlogitsloss#torch.nn.BCEWithLogitsLoss) | [paddle.nn.BCEWithLogitsLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/BCEWithLogitsLoss_cn.html#bcewithlogitsloss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 7 | [torch.nn.SmoothL1Loss](https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html?highlight=torch%20nn%20smoothl1loss#torch.nn.SmoothL1Loss) | [paddle.nn.SmoothL1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/SmoothL1Loss_cn.html#smoothl1loss) | Functionally equivalent, with different parameter names; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 1 | [torch.nn.L1Loss](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html?highlight=l1loss#torch.nn.L1Loss) | [paddle.nn.L1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/loss/L1Loss_cn.html#l1loss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 2 | [torch.nn.MSELoss](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mseloss#torch.nn.MSELoss) | [paddle.nn.MSELoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MSELoss_cn.html#mseloss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 3 | [torch.nn.CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss) | [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/CrossEntropyLoss_cn.html#crossentropyloss) | [Difference comparison](torch.nn.CrossEntropyLoss.md) |
| 4 | [torch.nn.KLDivLoss](https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html?highlight=kldivloss#torch.nn.KLDivLoss) | [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/KLDivLoss_cn.html#kldivloss) | [Difference comparison](torch.nn.KLDivLoss.md) |
| 5 | [torch.nn.BCELoss](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html?highlight=bceloss#torch.nn.BCELoss) | [paddle.nn.BCELoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BCELoss_cn.html#bceloss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 6 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html?highlight=bcewithlogitsloss#torch.nn.BCEWithLogitsLoss) | [paddle.nn.BCEWithLogitsLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BCEWithLogitsLoss_cn.html#bcewithlogitsloss) | Functionally equivalent; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 7 | [torch.nn.SmoothL1Loss](https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html?highlight=torch%20nn%20smoothl1loss#torch.nn.SmoothL1Loss) | [paddle.nn.SmoothL1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/SmoothL1Loss_cn.html#smoothl1loss) | Functionally equivalent, with different parameter names; PyTorch has the deprecated parameters `size_average` and `reduce`. |
***Continuously updated...***
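Every loss entry in the table above defaults to the `'mean'` reduction in both frameworks. The shared semantics can be sketched in plain Python (an illustration only; the helper names below are made up, not code from either library):

```python
# Plain-Python sketch of the loss semantics the mapped APIs share.
# All three default to the 'mean' reduction in both frameworks.

def l1_loss(pred, target):
    # mean absolute error
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mse_loss(pred, target):
    # mean squared error
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def smooth_l1_loss(pred, target, beta=1.0):
    # quadratic below beta, linear beyond it
    def elem(d):
        d = abs(d)
        return 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return sum(elem(p - t) for p, t in zip(pred, target)) / len(pred)

print(l1_loss([1.0, 2.0], [0.0, 0.0]))         # 1.5
print(mse_loss([1.0, 2.0], [0.0, 0.0]))        # 2.5
print(smooth_l1_loss([1.0, 2.0], [0.0, 0.0]))  # 1.0
```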
......@@ -7,7 +7,7 @@ torch.nn.CrossEntropyLoss(weight=None,
reduce=None,
reduction='mean')
```
### [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/CrossEntropyLoss_cn.html#crossentropyloss)
### [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/CrossEntropyLoss_cn.html#crossentropyloss)
```python
paddle.nn.CrossEntropyLoss(weight=None,
ignore_index=-100,
......
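For reference while reading the difference comparison: by default both CrossEntropyLoss layers compute softmax over the logits followed by negative log-likelihood. A plain-Python sketch under that assumption (`cross_entropy` below is a made-up helper):

```python
import math

# Plain-Python sketch of the default cross-entropy computation:
# softmax over logits, then negative log-likelihood of the label.

def cross_entropy(logits, label):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    log_prob = math.log(exps[label] / sum(exps))
    return -log_prob

print(cross_entropy([0.0, 0.0], 0))  # log(2), about 0.6931
```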
......@@ -7,7 +7,7 @@ torch.nn.KLDivLoss(size_average=None,
log_target=False)
```
### [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/KLDivLoss_cn.html)
### [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/KLDivLoss_cn.html#kldivloss)
```python
paddle.nn.KLDivLoss(reduction='mean')
```
......
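Both KLDivLoss layers take an input that is already in log-space and reduce the element-wise term `target * (log(target) - input)`. A plain-Python sketch under that assumption (the helper name is made up):

```python
import math

# Element-wise KL-divergence term both APIs reduce over, with the
# default 'mean' reduction. `log_pred` is assumed to be in log-space.

def kl_div(log_pred, target):
    terms = [t * (math.log(t) - lp) for lp, t in zip(log_pred, target)]
    return sum(terms) / len(terms)

# identical distributions give zero divergence
p = [0.5, 0.5]
print(kl_div([math.log(v) for v in p], p))  # 0.0
```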
......@@ -9,7 +9,7 @@ torch.nn.AvgPool1d(kernel_size,
count_include_pad=True)
```
### [paddle.nn.AvgPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/AvgPool1D_cn.html#avgpool1d)
### [paddle.nn.AvgPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/AvgPool1D_cn.html#avgpool1d)
```python
paddle.nn.AvgPool1D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.AvgPool2d(kernel_size,
divisor_override=None)
```
### [paddle.nn.AvgPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/AvgPool2D_cn.html#avgpool2d)
### [paddle.nn.AvgPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/AvgPool2D_cn.html#avgpool2d)
```python
paddle.nn.AvgPool2D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.AvgPool3d(kernel_size,
divisor_override=None)
```
### [paddle.nn.AvgPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/AvgPool3D_cn.html#avgpool3d)
### [paddle.nn.AvgPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/AvgPool3D_cn.html#avgpool3d)
```python
paddle.nn.AvgPool3D(kernel_size,
......
......@@ -7,7 +7,7 @@ torch.nn.BatchNorm1d(num_features,
affine=True,
track_running_stats=True)
```
### [paddle.nn.BatchNorm1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/norm/BatchNorm1D_cn.html#batchnorm1d)
### [paddle.nn.BatchNorm1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BatchNorm1D_cn.html#batchnorm1d)
```python
paddle.nn.BatchNorm1D(num_features,
momentum=0.9,
......
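The differing `momentum` defaults visible in the signatures (0.1 in PyTorch, 0.9 in PaddlePaddle) reflect opposite conventions for updating the running statistics, assuming the usual update rules. A plain-Python sketch:

```python
# Sketch of the two running-statistics momentum conventions (assumed
# semantics): PyTorch's default momentum=0.1 corresponds to
# PaddlePaddle's default momentum=0.9.

def torch_style_update(running, batch, momentum=0.1):
    # running <- (1 - momentum) * running + momentum * batch
    return (1 - momentum) * running + momentum * batch

def paddle_style_update(running, batch, momentum=0.9):
    # running <- momentum * running + (1 - momentum) * batch
    return momentum * running + (1 - momentum) * batch

a = torch_style_update(1.0, 2.0)
b = paddle_style_update(1.0, 2.0)
print(a, b)  # both approximately 1.1
```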
......@@ -7,7 +7,7 @@ torch.nn.BatchNorm2d(num_features,
affine=True,
track_running_stats=True)
```
### [paddle.nn.BatchNorm2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/norm/BatchNorm2D_cn.html#batchnorm2d)
### [paddle.nn.BatchNorm2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BatchNorm2D_cn.html#batchnorm2d)
```python
paddle.nn.BatchNorm2D(num_features,
momentum=0.9,
......
......@@ -7,7 +7,7 @@ torch.nn.BatchNorm3d(num_features,
affine=True,
track_running_stats=True)
```
### [paddle.nn.BatchNorm3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/norm/BatchNorm3D_cn.html#batchnorm3d)
### [paddle.nn.BatchNorm3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BatchNorm3D_cn.html#batchnorm3d)
```python
paddle.nn.BatchNorm3D(num_features,
momentum=0.9,
......
......@@ -4,7 +4,7 @@
torch.nn.ConstantPad1d(padding, value)
```
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad1D_cn.html#pad1d)
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad1D_cn.html#pad1d)
```python
paddle.nn.Pad1D(padding, mode='constant', value=0.0, data_format='NCL', name=None)
```
......
......@@ -4,7 +4,7 @@
torch.nn.ConstantPad2d(padding, value)
```
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad2D_cn.html#pad2d)
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad2D_cn.html#pad2d)
```python
paddle.nn.Pad2D(padding, mode='constant', value=0.0, data_format='NCHW', name=None)
```
......
......@@ -4,7 +4,7 @@
torch.nn.ConstantPad3d(padding, value)
```
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad3D_cn.html#pad3d)
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad3D_cn.html#pad3d)
```python
paddle.nn.Pad3D(padding, mode='constant', value=0.0, data_format='NCDHW', name=None)
```
......
......@@ -13,7 +13,7 @@ torch.nn.Conv1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv1D_cn.html#conv1d)
### [paddle.nn.Conv1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv1D_cn.html#conv1d)
```python
paddle.nn.Conv1D(in_channels,
......@@ -37,7 +37,7 @@ paddle.nn.Conv1D(in_channels,
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning a trainable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
#### Setting `padding`
***PyTorch***: `padding` only supports the list or tuple type.
......
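Whatever form `padding` takes, the output length follows the standard 1-D convolution formula; a plain-Python sketch (the helper name is made up):

```python
# Output-length formula shared by torch.nn.Conv1d and paddle.nn.Conv1D:
# L_out = floor((L_in + 2*padding - dilation*(kernel-1) - 1) / stride) + 1

def conv1d_out_len(l_in, kernel, stride=1, padding=0, dilation=1):
    return (l_in + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

print(conv1d_out_len(28, kernel=3, stride=1, padding=1))  # 28
print(conv1d_out_len(28, kernel=3, stride=2))             # 13
```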
......@@ -13,7 +13,7 @@ torch.nn.Conv2d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv2D_cn.html#conv2d)
### [paddle.nn.Conv2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv2D_cn.html#conv2d)
```python
paddle.nn.Conv2D(in_channels,
......@@ -37,7 +37,7 @@ paddle.nn.Conv2D(in_channels,
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning a trainable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
#### Setting `padding`
***PyTorch***: `padding` only supports the list or tuple type. It can take 3 formats:
......
......@@ -13,7 +13,7 @@ torch.nn.Conv3d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv3D_cn.html#conv3d)
### [paddle.nn.Conv3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv3D_cn.html#conv3d)
```python
paddle.nn.Conv3D(in_channels,
......@@ -37,7 +37,7 @@ paddle.nn.Conv3D(in_channels,
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning a trainable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
#### Setting `padding`
***PyTorch***: `padding` only supports the list or tuple type. It can take 3 formats:
(1) five pairs: \[\[0,0\], \[0,0\], \[padding_depth_front, padding_depth_back\], \[padding_height_top, padding_height_bottom\], \[padding_width_left, padding_width_right\]\], where any pair may be replaced by a single integer, meaning the pair's two values are equal;
......
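The pair-list format above, where a single integer may stand in for an equal pair, can be normalized with a small plain-Python helper (a made-up illustration, not library code):

```python
# Normalize a per-dimension padding list: a bare int inside the list
# stands for an equal (before, after) pair.

def normalize_pairs(padding):
    return [(p, p) if isinstance(p, int) else tuple(p) for p in padding]

print(normalize_pairs([[0, 0], [0, 0], 1, [2, 3], 4]))
# [(0, 0), (0, 0), (1, 1), (2, 3), (4, 4)]
```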
......@@ -13,7 +13,7 @@ torch.nn.ConvTranspose1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv1DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv1DTranspose_cn.html#conv1dtranspose)
### [paddle.nn.Conv1DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv1DTranspose_cn.html#conv1dtranspose)
```python
paddle.nn.Conv1DTranspose(in_channels,
out_channels,
......@@ -34,7 +34,7 @@ paddle.nn.Conv1DTranspose(in_channels,
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning a trainable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
#### Setting the `padding` size
***PyTorch***: `padding` only supports the list or tuple type.
......
......@@ -13,7 +13,7 @@ torch.nn.ConvTranspose1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv2DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv2DTranspose_cn.html#conv2dtranspose)
### [paddle.nn.Conv2DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv2DTranspose_cn.html#conv2dtranspose)
```python
paddle.nn.Conv2DTranspose(in_channels,
out_channels,
......@@ -34,7 +34,7 @@ paddle.nn.Conv2DTranspose(in_channels,
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning a trainable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
#### Setting the `padding` size
***PyTorch***: `padding` only supports the list or tuple type.
......
......@@ -13,7 +13,7 @@ torch.nn.ConvTranspose1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv3DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv3DTranspose_cn.html#conv3dtranspose)
### [paddle.nn.Conv3DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv3DTranspose_cn.html#conv3dtranspose)
```python
paddle.nn.Conv3DTranspose(in_channels,
out_channels,
......@@ -34,7 +34,7 @@ paddle.nn.Conv2DTranspose(in_channels,
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning a trainable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
#### Setting the `padding` size
***PyTorch***: `padding` only supports the list or tuple type.
......
......@@ -4,7 +4,7 @@
torch.nn.Dropout(p=0.5, inplace=False)
```
### [paddle.nn.Dropout](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Dropout_cn.html#dropout)
### [paddle.nn.Dropout](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Dropout_cn.html#dropout)
```python
paddle.nn.Dropout(p=0.5, axis=None, mode="upscale_in_train", name=None)
```
......
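PaddlePaddle's default `mode="upscale_in_train"` matches PyTorch's dropout scaling, assuming the usual semantics: kept units are scaled by 1/(1-p) during training so inference needs no rescaling. A plain-Python sketch (the helper name is made up):

```python
import random

# Sketch of 'upscale_in_train' dropout: each unit is zeroed with
# probability p, and survivors are scaled by 1 / (1 - p).

def dropout_upscale_in_train(xs, p, rng):
    return [0.0 if rng.random() < p else x / (1 - p) for x in xs]

rng = random.Random(0)
out = dropout_upscale_in_train([1.0] * 8, p=0.5, rng=rng)
print(out)  # each entry is either 0.0 or 2.0
```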
......@@ -3,7 +3,7 @@
```python
torch.nn.Dropout2d(p=0.5, inplace=False)
```
### [paddle.nn.Dropout2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Dropout2D_cn.html#dropout2d)
### [paddle.nn.Dropout2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Dropout2D_cn.html#dropout2d)
```python
paddle.nn.Dropout2D(p=0.5, data_format='NCHW', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.Dropout3d(p=0.5, inplace=False)
```
### [paddle.nn.Dropout3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Dropout3D_cn.html#dropout3d)
### [paddle.nn.Dropout3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Dropout3D_cn.html#dropout3d)
```python
paddle.nn.Dropout3D(p=0.5, data_format='NCDHW', name=None)
```
......
......@@ -9,7 +9,7 @@ torch.nn.Embedding(num_embeddings,
scale_grad_by_freq=False,
sparse=False)
```
### [paddle.nn.Embedding](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Embedding_cn.html#embedding)
### [paddle.nn.Embedding](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Embedding_cn.html#embedding)
```python
paddle.nn.Embedding(num_embeddings,
embedding_dim,
......
......@@ -10,7 +10,7 @@ torch.nn.GRU(input_size,
bidirectional=False)
```
### [paddle.nn.GRU](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/rnn/GRU_cn.html#gru)
### [paddle.nn.GRU](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/GRU_cn.html#gru)
```python
paddle.nn.GRU(input_size,
hidden_size,
......@@ -33,4 +33,4 @@ paddle.nn.GRU(input_size,
### Functional differences
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning trainable bias parameters are used.
***PaddlePaddle***: `weight_ih_attr`/`weight_hh_attr`/`bias_ih_attr`/`bias_hh_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_ih_attr`/`bias_hh_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_ih_attr`/`weight_hh_attr`/`bias_ih_attr`/`bias_hh_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_ih_attr`/`bias_hh_attr` to a bool has the same effect as PyTorch's `bias`.
......@@ -11,7 +11,7 @@ torch.nn.LSTM(input_size,
proj_size=0)
```
### [paddle.nn.LSTM](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/rnn/LSTM_cn.html#lstm)
### [paddle.nn.LSTM](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/LSTM_cn.html#lstm)
```python
paddle.nn.LSTM(input_size,
hidden_size,
......
......@@ -5,7 +5,7 @@
torch.nn.Linear(in_features, out_features, bias=True)
```
### [paddle.nn.Linear](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Linear_cn.html#linear)
### [paddle.nn.Linear](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Linear_cn.html#linear)
```python
paddle.nn.Linear(in_features, out_features, weight_attr=None, bias_attr=None, name=None)
......@@ -15,4 +15,4 @@ torch.nn.Linear(in_features, out_features, bias=True)
#### Trainable-parameter settings
***PyTorch***: `bias` defaults to True, meaning a trainable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` fall back to the default weight/bias parameter attributes unless explicitly specified; see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for usage. Setting `bias_attr` to a bool has the same effect as PyTorch's `bias`.
......@@ -10,7 +10,7 @@ torch.nn.MaxPool1d(kernel_size,
ceil_mode=False)
```
### [paddle.nn.MaxPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/MaxPool1D_cn.html#maxpool1d)
### [paddle.nn.MaxPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MaxPool1D_cn.html#maxpool1d)
```python
paddle.nn.MaxPool1D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.MaxPool2d(kernel_size,
ceil_mode=False)
```
### [paddle.nn.MaxPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/MaxPool2D_cn.html#maxpool2d)
### [paddle.nn.MaxPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MaxPool2D_cn.html#maxpool2d)
```python
paddle.nn.MaxPool2D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.MaxPool3d(kernel_size,
ceil_mode=False)
```
### [paddle.nn.MaxPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/MaxPool3D_cn.html#maxpool3d)
### [paddle.nn.MaxPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MaxPool3D_cn.html#maxpool3d)
```python
paddle.nn.MaxPool3D(kernel_size,
......
......@@ -4,7 +4,7 @@
torch.nn.ReflectionPad1d(padding)
```
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad1D_cn.html#pad1d)
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad1D_cn.html#pad1d)
```python
paddle.nn.Pad1D(padding, mode='constant', value=0.0, data_format='NCL', name=None)
```
......
......@@ -4,7 +4,7 @@
torch.nn.ReflectionPad2d(padding)
```
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad2D_cn.html#pad2d)
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad2D_cn.html#pad2d)
```python
paddle.nn.Pad2D(padding, mode='constant', value=0.0, data_format='NCHW', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.ReplicationPad1d(padding)
```
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad1D_cn.html#pad1d)
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad1D_cn.html#pad1d)
```python
paddle.nn.Pad1D(padding, mode='constant', value=0.0, data_format='NCL', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.ReplicationPad2d(padding)
```
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad2D_cn.html#pad2d)
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad2D_cn.html#pad2d)
```python
paddle.nn.Pad2D(padding, mode='constant', value=0.0, data_format='NCHW', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.ReplicationPad3d(padding)
```
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad3D_cn.html#pad3d)
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad3D_cn.html#pad3d)
```python
paddle.nn.Pad3D(padding, mode='constant', value=0.0, data_format='NCDHW', name=None)
```
......
......@@ -6,7 +6,7 @@ torch.nn.Upsample(size=None,
mode='nearest',
align_corners=False)
```
### [paddle.nn.Upsample](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Upsample_cn.html#upsample)
### [paddle.nn.Upsample](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Upsample_cn.html#upsample)
```python
paddle.nn.Upsample(size=None,
scale_factor=None,
......
......@@ -12,7 +12,7 @@ torch.arange(start=0,
device=None,
requires_grad=False)
```
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/arange_cn.html#arange)
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/arange_cn.html#arange)
```python
paddle.arange(start=0,
end=None,
......
......@@ -3,7 +3,7 @@
```python
torch.bernoulli(input, *, generator=None, out=None)
```
### [paddle.bernoulli](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/bernoulli_cn.html#bernoulli)
### [paddle.bernoulli](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/bernoulli_cn.html#bernoulli)
```python
paddle.bernoulli(x, name=None)
```
......
......@@ -12,7 +12,7 @@ torch.empty(*size,
pin_memory=False)
```
### [paddle.empty](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/empty_cn.html#empty)
### [paddle.empty](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/empty_cn.html#empty)
```python
paddle.empty(shape,
......
......@@ -12,7 +12,7 @@ torch.eye(n,
requires_grad=False)
```
### [paddle.eye](https://pytorch.org/docs/stable/generated/torch.eye.html?highlight=eye#torch.eye)
### [paddle.eye](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/eye_cn.html#eye)
```python
paddle.eye(num_rows,
num_columns=None,
......
......@@ -5,7 +5,7 @@
torch.from_numpy(ndarray)
```
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/to_tensor_cn.html#to-tensor)
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/to_tensor_cn.html#to-tensor)
```python
paddle.to_tensor(data,
......
......@@ -12,7 +12,7 @@ torch.full(size,
requires_grad=False)
```
### [paddle.full](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/full_cn.html#full)
### [paddle.full](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/full_cn.html#full)
```python
paddle.full(shape,
fill_value,
......
......@@ -12,7 +12,7 @@ torch.full_like(input,
memory_format=torch.preserve_format)
```
### [paddle.full_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/full_like_cn.html#full-like)
### [paddle.full_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/full_like_cn.html#full-like)
```python
paddle.full_like(x, fill_value, dtype=None, name=None)
......
......@@ -5,7 +5,7 @@
torch.gather(input, dim, index, *, sparse_grad=False, out=None)
```
### [paddle.gather](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/manipulation/gather_cn.html#gather)
### [paddle.gather](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/gather_cn.html#gather)
```python
paddle.gather(x, index, axis=None, name=None)
......
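Note that the two gathers are not drop-in equivalents, assuming their usual semantics: `torch.gather` picks one element per output position along `dim`, while `paddle.gather` selects whole slices along `axis` using a 1-D index (similar to index selection). A plain-Python sketch of the difference (helper names are made up):

```python
# Contrast of the two gather semantics on a 2x2 matrix.

x = [[1, 2], [3, 4]]

def torch_style_gather_dim1(x, index):
    # out[i][j] = x[i][index[i][j]]  (element-wise along dim=1)
    return [[row[j] for j in idx_row] for row, idx_row in zip(x, index)]

def paddle_style_gather_axis0(x, index):
    # out[i] = x[index[i]]  (whole rows along axis=0)
    return [x[i] for i in index]

print(torch_style_gather_dim1(x, [[0, 0], [1, 0]]))  # [[1, 1], [4, 3]]
print(paddle_style_gather_axis0(x, [1, 0]))          # [[3, 4], [1, 2]]
```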
......@@ -12,7 +12,7 @@ torch.linspace(start,
requires_grad=False)
```
### [paddle.linspace](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/layers/linspace_cn.html#linspace)
### [paddle.linspace](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/linspace_cn.html#linspace)
```python
paddle.linspace(start,
stop,
......
......@@ -8,7 +8,7 @@ torch.load(f,
**pickle_load_args)
```
### [paddle.load](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/framework/io/load_cn.html#load)
### [paddle.load](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/load_cn.html#load)
```python
paddle.load(path, **configs)
......
......@@ -3,7 +3,7 @@
```python
torch.multinomial(input, num_samples, replacement=False, *, generator=None, out=None)
```
### [paddle.multinomial](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/multinomial_cn.html#multinomial)
### [paddle.multinomial](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/multinomial_cn.html#multinomial)
```python
paddle.multinomial(x, num_samples=1, replacement=False, name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.narrow(input, dim, start, length)
```
### [paddle.slice](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/layers/slice_cn.html#slice)
### [paddle.slice](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/slice_cn.html#slice)
```python
paddle.slice(input, axes, starts, ends)
```
......
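Assuming `paddle.slice` takes `axes`/`starts`/`ends` as shown above, a `torch.narrow(input, dim, start, length)` call maps onto it with `ends = start + length`. A small made-up helper sketches the argument translation:

```python
# Translate torch.narrow arguments into paddle.slice arguments.

def narrow_to_slice_args(dim, start, length):
    return {"axes": [dim], "starts": [start], "ends": [start + length]}

print(narrow_to_slice_args(dim=1, start=2, length=3))
# {'axes': [1], 'starts': [2], 'ends': [5]}
```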
......@@ -3,7 +3,7 @@
```python
torch.normal(mean, std, *, generator=None, out=None)
```
### [paddle.normal](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/normal_cn.html#normal)
### [paddle.normal](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/normal_cn.html#normal)
```python
paddle.normal(mean=0.0, std=1.0, shape=None, name=None)
```
......
......@@ -11,7 +11,7 @@ torch.ones(*size,
requires_grad=False)
```
### [paddle.ones](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/ones_cn.html#ones)
### [paddle.ones](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ones_cn.html#ones)
```python
paddle.ones(shape,
......
......@@ -11,7 +11,7 @@ torch.ones_like(input,
memory_format=torch.preserve_format)
```
### [paddle.ones_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/ones_like_cn.html#ones-like)
### [paddle.ones_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ones_like_cn.html#ones-like)
```python
paddle.ones_like(x, dtype=None, name=None)
......
......@@ -10,7 +10,7 @@ torch.rand(*size,
requires_grad=False)
```
### [paddle.rand](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/rand_cn.html#rand)
### [paddle.rand](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/rand_cn.html#rand)
```python
paddle.rand(shape,
......
......@@ -13,7 +13,7 @@ torch.randint(low=0,
requires_grad=False)
```
### [paddle.randint](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/randint_cn.html#randint)
### [paddle.randint](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/randint_cn.html#randint)
```python
paddle.randint(low=0,
high=None,
......
......@@ -10,7 +10,7 @@ torch.randn(*size,
requires_grad=False)
```
### [paddle.randn](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/randn_cn.html#randn)
### [paddle.randn](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/randn_cn.html#randn)
```python
paddle.randn(shape,
......
......@@ -11,7 +11,7 @@ torch.randperm(n,
requires_grad=False,
pin_memory=False)
```
### [paddle.randperm](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/randperm_cn.html#randperm)
### [paddle.randperm](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/randperm_cn.html#randperm)
```python
paddle.randperm(n, dtype='int64', name=None)
```
......
......@@ -12,7 +12,7 @@ torch.range(start=0,
device=None,
requires_grad=False)
```
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/arange_cn.html#arange)
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/arange_cn.html#arange)
```python
paddle.arange(start=0,
end=None,
......
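A porting caveat: `torch.range` includes its endpoint, while `paddle.arange` (like Python's `range`) excludes it, so the end value usually needs `+ step` when translating. A plain-Python sketch under that assumption (the helper is made up):

```python
# arange-style generation with an exclusive endpoint.

def arange(start, end, step=1):
    out, v = [], start
    while v < end:
        out.append(v)
        v += step
    return out

# torch.range(0, 3) yields [0, 1, 2, 3]; the arange equivalent adds step:
print(arange(0, 3 + 1))  # [0, 1, 2, 3]
```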
......@@ -8,7 +8,7 @@ torch.save(obj,
pickle_protocol=2)
```
### [paddle.save](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/framework/io/save_cn.html#save)
### [paddle.save](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/save_cn.html#save)
```python
paddle.save(obj, path, pickle_protocol=2)
......
......@@ -9,7 +9,7 @@ torch.tensor(data,
pin_memory=False)
```
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/to_tensor_cn.html#to-tensor)
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/to_tensor_cn.html#to-tensor)
```python
paddle.to_tensor(data,
......
......@@ -5,7 +5,7 @@
torch.transpose(input, dim0, dim1)
```
### [paddle.transpose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/layers/transpose_cn.html#transpose)
### [paddle.transpose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/transpose_cn.html#transpose)
```python
paddle.transpose(x, perm, name=None)
......
......@@ -11,7 +11,7 @@ torch.zeros(*size,
requires_grad=False)
```
### [paddle.zeros](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/zeros_cn.html#zeros)
### [paddle.zeros](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/zeros_cn.html#zeros)
```python
paddle.zeros(shape,
......
......@@ -11,7 +11,7 @@ torch.zeros_like(input,
memory_format=torch.preserve_format)
```
### [paddle.zeros_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/zeros_like_cn.html#zeros-like)
### [paddle.zeros_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/zeros_like_cn.html#zeros-like)
```python
paddle.zeros_like(x, dtype=None, name=None)
......
......@@ -2,14 +2,14 @@
This document organizes the PyTorch-PaddlePaddle API mapping list related to data processing, distributed processing, and similar functionality.
| No. | PyTorch API | PaddlePaddle API | Notes |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | [torch.nn.DataParallel](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html?highlight=dataparallel#torch.nn.DataParallel) | [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dygraph/parallel/DataParallel_cn.html#dataparallel) | [Difference comparison](torch.nn.DataParallel.md) |
| 2 | [torch.nn.parameter.Parameter](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html?highlight=torch%20nn%20parameter#torch.nn.parameter.Parameter) | [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/ce2bdb0afdc2a09a127e8d9aa394c8b00a877364/python/paddle/fluid/layers/tensor.py#L77) | [Difference comparison](torch.nn.parameter.Parameter.md) |
| 1 | [torch.nn.DataParallel](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html?highlight=dataparallel#torch.nn.DataParallel) | [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/DataParallel_cn.html#dataparallel) | [Difference comparison](torch.nn.DataParallel.md) |
| 2 | [torch.nn.parameter.Parameter](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html?highlight=torch%20nn%20parameter#torch.nn.parameter.Parameter) | [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/release/2.1/python/paddle/fluid/layers/tensor.py#L77) | [Difference comparison](torch.nn.parameter.Parameter.md) |
| 3 | [torch.nn.utils.clip_grad_value_](https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_value_.html?highlight=clip_grad_value_#torch.nn.utils.clip_grad_value_) | No direct equivalent | [Composite implementation](torch.nn.utils.clip_grad_value_.md) |
| 4 | [torch.utils.data.DataLoader](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader) | [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/reader/DataLoader_cn.html#dataloader) | [Difference comparison](torch.utils.data.DataLoader.md) |
| 4 | [torch.utils.data.DataLoader](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader) | [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/DataLoader_cn.html#dataloader) | [Difference comparison](torch.utils.data.DataLoader.md) |
| 5 | [torch.utils.data.random_split](https://pytorch.org/docs/stable/data.html?highlight=random_split#torch.utils.data.random_split) | No direct equivalent | [Composite implementation](torch.utils.data.random_split.md) |
| 6 | [torch.utils.data.distributed.DistributedSampler](https://pytorch.org/docs/stable/data.html?highlight=distributedsampler#torch.utils.data.distributed.DistributedSampler) | No direct equivalent | [Composite implementation](torch.utils.data.distributed.DistributedSampler.md) |
| 7 | [torch.utils.data.Dataset](https://pytorch.org/docs/stable/data.html?highlight=torch%20utils%20data%20dataset#torch.utils.data.Dataset) | [paddle.io.Dataset](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/dataset/Dataset_cn.html#dataset) | Functionally identical |
| 8 | [torch.utils.data.BatchSampler](https://pytorch.org/docs/stable/data.html?highlight=batchsampler#torch.utils.data.BatchSampler) | [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/batch_sampler/BatchSampler_cn.html#batchsampler) | [Difference comparison](torch.utils.data.BatchSampler.md) |
| 9 | [torch.utils.data.Sampler](https://pytorch.org/docs/stable/data.html?highlight=sampler#torch.utils.data.Sampler) | [paddle.io.Sampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/sampler/Sampler_cn.html#sampler) | Functionally identical |
| 7 | [torch.utils.data.Dataset](https://pytorch.org/docs/stable/data.html?highlight=torch%20utils%20data%20dataset#torch.utils.data.Dataset) | [paddle.io.Dataset](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/Dataset_cn.html#dataset) | Functionally identical |
| 8 | [torch.utils.data.BatchSampler](https://pytorch.org/docs/stable/data.html?highlight=batchsampler#torch.utils.data.BatchSampler) | [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/BatchSampler_cn.html#batchsampler) | [Difference comparison](torch.utils.data.BatchSampler.md) |
| 9 | [torch.utils.data.Sampler](https://pytorch.org/docs/stable/data.html?highlight=sampler#torch.utils.data.Sampler) | [paddle.io.Sampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/Sampler_cn.html#sampler) | Functionally identical |
***Continuously updated...***
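Both `torch.utils.data.Dataset` and `paddle.io.Dataset` in the table above are map-style datasets that only require `__getitem__` and `__len__`. A framework-free sketch of that shared protocol (illustrative; neither framework is imported, and the class name is hypothetical):

```python
# Map-style dataset protocol shared by torch.utils.data.Dataset
# and paddle.io.Dataset: index access plus a length.
class SquaresDataset:
    def __init__(self, n):
        self.n = n

    def __getitem__(self, idx):
        # Return a (feature, label) pair for the given index.
        return idx, idx * idx

    def __len__(self):
        return self.n


ds = SquaresDataset(5)
print(len(ds))  # 5
print(ds[3])    # (3, 9)
```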
......@@ -4,7 +4,7 @@
torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)
```
### [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dygraph/parallel/DataParallel_cn.html#dataparallel)
### [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/DataParallel_cn.html#dataparallel)
```python
paddle.DataParallel(layers, strategy=None, comm_buffer_size=25, last_comm_buffer_size=1)
```
......
......@@ -4,7 +4,7 @@
torch.nn.parameter.Parameter(data, requires_grad=True)
```
## [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/ce2bdb0afdc2a09a127e8d9aa394c8b00a877364/python/paddle/fluid/layers/tensor.py#L77)
## [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/release/2.1/python/paddle/fluid/layers/tensor.py#L77)
```python
paddle.create_parameter(shape,
dtype,
......
......@@ -4,7 +4,7 @@
torch.utils.data.BatchSampler(sampler, batch_size, drop_last)
```
### [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/batch_sampler/BatchSampler_cn.html#batchsampler)
### [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/BatchSampler_cn.html#batchsampler)
```python
paddle.io.BatchSampler(dataset=None, sampler=None, shuffle=False, batch_size=1, drop_last=False)
```
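Both APIs group indices produced by a sampler into fixed-size batches, with `drop_last` controlling whether a trailing incomplete batch is kept. A minimal framework-free sketch of that behavior:

```python
# Sketch of BatchSampler semantics: collect indices into batches of
# batch_size; optionally drop the final incomplete batch.
def batch_indices(sampler, batch_size, drop_last):
    batch = []
    for idx in sampler:
        batch.append(idx)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last:
        yield batch


print(list(batch_indices(range(5), 2, drop_last=False)))  # [[0, 1], [2, 3], [4]]
print(list(batch_indices(range(5), 2, drop_last=True)))   # [[0, 1], [2, 3]]
```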
......
......@@ -18,7 +18,7 @@ torch.utils.data.DataLoader(dataset,
persistent_workers=False)
```
### [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/reader/DataLoader_cn.html#dataloader)
### [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/DataLoader_cn.html#dataloader)
```python
paddle.io.DataLoader(dataset,
feed_list=None,
......
......@@ -3,20 +3,20 @@
This document organizes the PyTorch-PaddlePaddle API mapping list related to vision processing.
| No. | PyTorch API | PaddlePaddle API | Notes |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------- |
| 1 | [torchvision.transforms.Compose](https://pytorch.org/vision/stable/transforms.html?highlight=compose#torchvision.transforms.Compose) | [paddle.vision.transforms.Compose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Compose_cn.html#compose) | Functionally identical |
| 1 | [torchvision.transforms.Compose](https://pytorch.org/vision/stable/transforms.html?highlight=compose#torchvision.transforms.Compose) | [paddle.vision.transforms.Compose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Compose_cn.html#compose) | Functionally identical |
| 2 | [torchvision.transforms.ToPILImage](https://pytorch.org/vision/stable/transforms.html?highlight=topilimage#torchvision.transforms.ToPILImage) | No direct equivalent | [Composite implementation](torchvision.transforms.ToPILImage.md) |
| 3 | [torchvision.transforms.Resize](https://pytorch.org/vision/stable/transforms.html?highlight=resize#torchvision.transforms.Resize) | [paddle.vision.transforms.Resize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Resize_cn.html#resize) | Functionally identical |
| 4 | [torchvision.transforms.ToTensor](https://pytorch.org/vision/stable/transforms.html?highlight=totensor#torchvision.transforms.ToTensor) | [paddle.vision.transforms.ToTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/ToTensor_cn.html#totensor) | Functionally identical |
| 5 | [torchvision.transforms.RandomHorizontalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomhorizontalflip#torchvision.transforms.RandomHorizontalFlip) | [paddle.vision.transforms.RandomHorizontalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomHorizontalFlip_cn.html#randomhorizontalflip) | Functionally identical |
| 6 | [torchvision.transforms.CenterCrop](https://pytorch.org/vision/stable/transforms.html?highlight=centercrop#torchvision.transforms.CenterCrop) | [paddle.vision.transforms.CenterCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/CenterCrop_cn.html#centercrop) | Functionally identical |
| 7 | [torchvision.transforms.ColorJitter](https://pytorch.org/vision/stable/transforms.html?highlight=colorjitter#torchvision.transforms.ColorJitter) | [paddle.vision.transforms.ColorJitter](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/ColorJitter_cn.html#colorjitter) | Functionally identical |
| 8 | [torchvision.transforms.Grayscale](https://pytorch.org/vision/stable/transforms.html?highlight=grayscale#torchvision.transforms.Grayscale) | [paddle.vision.transforms.Grayscale](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Grayscale_cn.html#grayscale) | Functionally identical |
| 9 | [torchvision.transforms.Normalize](https://pytorch.org/vision/stable/transforms.html?highlight=normalize#torchvision.transforms.Normalize) | [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Normalize_cn.html#normalize) | [Difference comparison](torchvision.transforms.Normalize.md) |
| 10 | [torchvision.transforms.RandomResizedCrop](https://pytorch.org/vision/stable/transforms.html?highlight=randomresizedcrop#torchvision.transforms.RandomResizedCrop) | [paddle.vision.transforms.RandomResizedCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomResizedCrop_cn.html#randomresizedcrop) | Functionally identical |
| 11 | [torchvision.transforms.Pad](https://pytorch.org/vision/stable/transforms.html?highlight=pad#torchvision.transforms.Pad) | [paddle.vision.transforms.Pad](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Pad_cn.html#pad) | Functionally identical |
| 3 | [torchvision.transforms.Resize](https://pytorch.org/vision/stable/transforms.html?highlight=resize#torchvision.transforms.Resize) | [paddle.vision.transforms.Resize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Resize_cn.html#resize) | Functionally identical |
| 4 | [torchvision.transforms.ToTensor](https://pytorch.org/vision/stable/transforms.html?highlight=totensor#torchvision.transforms.ToTensor) | [paddle.vision.transforms.ToTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/ToTensor_cn.html#totensor) | Functionally identical |
| 5 | [torchvision.transforms.RandomHorizontalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomhorizontalflip#torchvision.transforms.RandomHorizontalFlip) | [paddle.vision.transforms.RandomHorizontalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomHorizontalFlip_cn.html#randomhorizontalflip) | Functionally identical |
| 6 | [torchvision.transforms.CenterCrop](https://pytorch.org/vision/stable/transforms.html?highlight=centercrop#torchvision.transforms.CenterCrop) | [paddle.vision.transforms.CenterCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/CenterCrop_cn.html#centercrop) | Functionally identical |
| 7 | [torchvision.transforms.ColorJitter](https://pytorch.org/vision/stable/transforms.html?highlight=colorjitter#torchvision.transforms.ColorJitter) | [paddle.vision.transforms.ColorJitter](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/ColorJitter_cn.html#colorjitter) | Functionally identical |
| 8 | [torchvision.transforms.Grayscale](https://pytorch.org/vision/stable/transforms.html?highlight=grayscale#torchvision.transforms.Grayscale) | [paddle.vision.transforms.Grayscale](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Grayscale_cn.html#grayscale) | Functionally identical |
| 9 | [torchvision.transforms.Normalize](https://pytorch.org/vision/stable/transforms.html?highlight=normalize#torchvision.transforms.Normalize) | [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Normalize_cn.html#normalize) | [Difference comparison](torchvision.transforms.Normalize.md) |
| 10 | [torchvision.transforms.RandomResizedCrop](https://pytorch.org/vision/stable/transforms.html?highlight=randomresizedcrop#torchvision.transforms.RandomResizedCrop) | [paddle.vision.transforms.RandomResizedCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomResizedCrop_cn.html#randomresizedcrop) | Functionally identical |
| 11 | [torchvision.transforms.Pad](https://pytorch.org/vision/stable/transforms.html?highlight=pad#torchvision.transforms.Pad) | [paddle.vision.transforms.Pad](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Pad_cn.html#pad) | Functionally identical |
| 12 | [torchvision.transforms.RandomCrop](https://pytorch.org/vision/stable/transforms.html?highlight=randomcrop#torchvision.transforms.RandomCrop) | [paddle.vision.transforms.RandomCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomCrop_cn.html#randomcrop) | Functionally identical |
| 13 | [torchvision.transforms.RandomRotation](https://pytorch.org/vision/stable/transforms.html?highlight=randomrotation#torchvision.transforms.RandomRotation) | [paddle.vision.transforms.RandomRotation](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomRotation_cn.html#daimashili) | Functionally identical |
| 14 | [torchvision.transforms.RandomVerticalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomverticalflip#torchvision.transforms.RandomVerticalFlip) | [paddle.vision.transforms.RandomVerticalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomVerticalFlip_cn.html#randomverticalflip) | Functionally identical |
| 13 | [torchvision.transforms.RandomRotation](https://pytorch.org/vision/stable/transforms.html?highlight=randomrotation#torchvision.transforms.RandomRotation) | [paddle.vision.transforms.RandomRotation](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomRotation_cn.html#randomrotation) | Functionally identical |
| 14 | [torchvision.transforms.RandomVerticalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomverticalflip#torchvision.transforms.RandomVerticalFlip) | [paddle.vision.transforms.RandomVerticalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomVerticalFlip_cn.html#randomverticalflip) | Functionally identical |
| 15 | [torchvision.transforms.Lambda](https://pytorch.org/vision/stable/transforms.html?highlight=lambda#torchvision.transforms.Lambda) | No direct equivalent | [Composite implementation](torchvision.transforms.Lambda.md) |
| 17 | [torchvision.utils.save_image](https://pytorch.org/vision/stable/utils.html?highlight=save_image#torchvision.utils.save_image) | No direct equivalent | [Composite implementation](torchvision.utils.save_image.md) |
| 18 | [torchvision.models model family](https://pytorch.org/vision/stable/models.html?highlight=torchvision%20models) | Provided by X2Paddle | [Usage](torchvision.models.md) |
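`Compose` has the same semantics in both libraries: apply a list of transforms in order, feeding each one's output to the next. A framework-free sketch of that behavior (illustrative only):

```python
# Minimal mimic of transforms.Compose: chain callables left to right.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, data):
        for t in self.transforms:
            data = t(data)
        return data


pipeline = Compose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # 8
```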
......
......@@ -4,7 +4,7 @@
torchvision.transforms.Normalize(mean, std, inplace=False)
```
### [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Normalize_cn.html#normalize)
### [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Normalize_cn.html#normalize)
```python
paddle.vision.transforms.Normalize(mean=0.0, std=1.0, data_format='CHW', to_rgb=False, keys=None)
```
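Both `Normalize` APIs compute `out = (input - mean) / std` element-wise (per channel, given channel-wise `mean`/`std`). A minimal sketch of that arithmetic on a flat list (illustrative only):

```python
# Sketch of Normalize arithmetic: shift by mean, scale by std.
def normalize(values, mean, std):
    return [(v - mean) / std for v in values]


print(normalize([0.0, 0.5, 1.0], mean=0.5, std=0.5))  # [-1.0, 0.0, 1.0]
```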
......
......@@ -70,12 +70,6 @@ def arg_parser():
action="store_true",
default=False,
help="define input shape for tf model")
parser.add_argument(
"--paddle_type",
"-pt",
type=_text_type,
default="dygraph",
help="define the paddle model type after converting(dygraph/static)")
parser.add_argument(
"--convert_torch_project",
"-tp",
......@@ -97,10 +91,7 @@ def arg_parser():
return parser
def tf2paddle(model_path,
save_dir,
define_input_shape=False,
paddle_type="dygraph"):
def tf2paddle(model_path, save_dir, define_input_shape=False):
# check tensorflow installation and version
try:
import os
......@@ -119,32 +110,21 @@ def tf2paddle(model_path,
return
from x2paddle.decoder.tf_decoder import TFDecoder
if paddle_type == "dygraph":
from x2paddle.op_mapper.dygraph.tf2paddle.tf_op_mapper import TFOpMapper
else:
from x2paddle.op_mapper.static.tf2paddle.tf_op_mapper import TFOpMapper
from x2paddle.op_mapper.tf2paddle.tf_op_mapper import TFOpMapper
print("Now translating model from tensorflow to paddle.")
model = TFDecoder(model_path, define_input_shape=define_input_shape)
mapper = TFOpMapper(model)
mapper.paddle_graph.build()
if paddle_type == "dygraph":
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="tf", paddle_type=paddle_type)
graph_opt.optimize(mapper.paddle_graph)
else:
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="tf", paddle_type=paddle_type)
graph_opt = GraphOptimizer(source_frame="tf")
graph_opt.optimize(mapper.paddle_graph)
mapper.paddle_graph.gen_model(save_dir)
def caffe2paddle(proto, weight, save_dir, caffe_proto, paddle_type):
def caffe2paddle(proto, weight, save_dir, caffe_proto):
from x2paddle.decoder.caffe_decoder import CaffeDecoder
if paddle_type == "dygraph":
from x2paddle.op_mapper.dygraph.caffe2paddle.caffe_op_mapper import CaffeOpMapper
else:
from x2paddle.op_mapper.static.caffe2paddle.caffe_op_mapper import CaffeOpMapper
from x2paddle.op_mapper.caffe2paddle.caffe_op_mapper import CaffeOpMapper
import google.protobuf as gpb
ver_part = gpb.__version__.split('.')
version_satisfy = False
......@@ -158,13 +138,13 @@ def caffe2paddle(proto, weight, save_dir, caffe_proto, paddle_type):
mapper.paddle_graph.build()
print("Model optimizing ...")
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="caffe", paddle_type=paddle_type)
graph_opt = GraphOptimizer(source_frame="caffe")
graph_opt.optimize(mapper.paddle_graph)
print("Model optimized.")
mapper.paddle_graph.gen_model(save_dir)
def onnx2paddle(model_path, save_dir, paddle_type):
def onnx2paddle(model_path, save_dir):
# check onnx installation and version
try:
import onnx
......@@ -178,10 +158,7 @@ def onnx2paddle(model_path, save_dir, paddle_type):
print("Now translating model from onnx to paddle.")
from x2paddle.decoder.onnx_decoder import ONNXDecoder
if paddle_type == "dygraph":
from x2paddle.op_mapper.dygraph.onnx2paddle.onnx_op_mapper import ONNXOpMapper
else:
from x2paddle.op_mapper.static.onnx2paddle.onnx_op_mapper import ONNXOpMapper
from x2paddle.op_mapper.onnx2paddle.onnx_op_mapper import ONNXOpMapper
model = ONNXDecoder(model_path)
mapper = ONNXOpMapper(model)
mapper.paddle_graph.build()
......@@ -206,7 +183,7 @@ def pytorch2paddle(module, save_dir, jit_type="trace", input_examples=None):
print("Now translating model from pytorch to paddle.")
from x2paddle.decoder.pytorch_decoder import ScriptDecoder, TraceDecoder
from x2paddle.op_mapper.dygraph.pytorch2paddle.pytorch_op_mapper import PyTorchOpMapper
from x2paddle.op_mapper.pytorch2paddle.pytorch_op_mapper import PyTorchOpMapper
if jit_type == "trace":
model = TraceDecoder(module, input_examples)
......@@ -216,8 +193,7 @@ def pytorch2paddle(module, save_dir, jit_type="trace", input_examples=None):
mapper.paddle_graph.build()
print("Model optimizing ...")
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(
source_frame="pytorch", paddle_type="dygraph", jit_type=jit_type)
graph_opt = GraphOptimizer(source_frame="pytorch", jit_type=jit_type)
graph_opt.optimize(mapper.paddle_graph)
print("Model optimized.")
mapper.paddle_graph.gen_model(save_dir, jit_type=jit_type)
......@@ -242,8 +218,6 @@ def main():
if not args.convert_torch_project:
assert args.framework is not None, "--framework is not defined(support tensorflow/caffe/onnx)"
assert args.save_dir is not None, "--save_dir is not defined"
assert args.paddle_type in ["dygraph", "static"
], "--paddle_type must be 'dygraph' or 'static'"
try:
import platform
......@@ -274,16 +248,15 @@ def main():
define_input_shape = False
if args.define_input_shape:
define_input_shape = True
tf2paddle(args.model, args.save_dir, define_input_shape,
args.paddle_type)
tf2paddle(args.model, args.save_dir, define_input_shape)
elif args.framework == "caffe":
assert args.prototxt is not None and args.weight is not None, "--prototxt and --weight should be defined while translating caffe model"
caffe2paddle(args.prototxt, args.weight, args.save_dir,
args.caffe_proto, args.paddle_type)
args.caffe_proto)
elif args.framework == "onnx":
assert args.model is not None, "--model should be defined while translating onnx model"
onnx2paddle(args.model, args.save_dir, args.paddle_type)
onnx2paddle(args.model, args.save_dir)
elif args.framework == "paddle2onnx":
print(
"Paddle to ONNX tool has been migrated to the new github: https://github.com/PaddlePaddle/paddle2onnx"
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from x2paddle.core.graph import GraphNode
from x2paddle.core.util import *
import collections
import six
class Layer(object):
def __init__(self):
self.op = None
self.param_attr = dict()
self.inputs = dict()
self.output = None
self.is_custom_layer = False
self.use_fluid = False
def get_code(self):
layer_code = ""
if self.output is not None:
if isinstance(self.output, six.string_types):
layer_code = self.output + " = "
else:
layer_code = self.output.layer_name + " = "
if self.is_custom_layer:
layer_code = layer_code + self.op + "("
elif self.op == "=":
layer_code = layer_code
elif self.use_fluid:
layer_code = layer_code + "fluid." + self.op + "("
elif self.op == "full_like":
layer_code = layer_code + "paddle." + self.op + "("
else:
layer_code = layer_code + "fluid.layers." + self.op + "("
if isinstance(self.inputs, list):
in_list = "["
for input in self.inputs:
if isinstance(input, GraphNode):
if hasattr(input, "index"):
in_list += (
input.layer_name + "[{}]".format(input.index) + ", "
)
else:
in_list += (input.layer_name + ", ")
elif isinstance(input, six.string_types):
in_list += (input + ", ")
else:
raise Exception(
"Element of inputs should be GraphNode or String")
in_list = in_list.strip(", ") + "], "
layer_code += in_list
elif isinstance(self.inputs, dict):
inputs = collections.OrderedDict(self.inputs)
for key, input in inputs.items():
if isinstance(input, GraphNode):
if hasattr(input, "index"):
layer_code = layer_code + key + "={}, ".format(
input.layer_name + "[{}]".format(input.index))
else:
layer_code = layer_code + key + "={}, ".format(
input.layer_name)
else:
layer_code = layer_code + key + "={}, ".format(input)
elif isinstance(self.inputs, GraphNode):
if hasattr(self.inputs, "index"):
layer_code += (
self.inputs.layer_name + "[{}]".format(self.inputs.index))
else:
layer_code += (self.inputs.layer_name)
if self.op != "=":
layer_code += ", "
elif isinstance(self.inputs, six.string_types):
layer_code += (self.inputs)
if self.op != "=":
layer_code += ", "
else:
raise Exception("Unknown type of inputs.")
param_attr = collections.OrderedDict(self.param_attr)
for key, value in param_attr.items():
if '\n' in str(value):
value = string(str(value).replace('\n', ','))
if str(key) == 'attr':
value = 'ParamAttr(' + str(value) + ')'
layer_code = layer_code + key + "={}, ".format(value)
layer_code = layer_code.strip(", ")
if self.op != "=":
layer_code += ")"
return layer_code
class FluidCode(object):
def __init__(self):
self.layers = list()
def add_layer(self,
op,
inputs,
output,
param_attr=None,
use_fluid=False,
is_custom_layer=False):
layer = Layer()
layer.op = op
layer.use_fluid = use_fluid
layer.is_custom_layer = is_custom_layer
if inputs is not None:
layer.inputs = inputs
layer.output = output
if param_attr is not None:
layer.param_attr = param_attr
self.layers.append(layer)
def add_note(self, note):
# note should be string
self.layers.append(note)
def clear(self):
self.layers = list()
def gen_codes(self):
codes = list()
for layer in self.layers:
if isinstance(layer, Layer):
codes.append(layer.get_code())
elif isinstance(layer, six.string_types):
codes.append(layer)
return codes
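`FluidCode`/`Layer` above assemble Python source lines as strings. A condensed standalone mimic of the `<output> = fluid.layers.<op>(...)` format that `get_code` emits (a sketch, not the class itself; the helper name is hypothetical):

```python
# Standalone mimic of Layer.get_code's output format: join keyword
# inputs and attributes into a single generated code line.
def gen_layer_code(op, inputs, output, param_attr):
    code = "{} = fluid.layers.{}(".format(output, op)
    parts = ["{}={}".format(k, v) for k, v in inputs.items()]
    parts += ["{}={}".format(k, v) for k, v in param_attr.items()]
    return code + ", ".join(parts) + ")"


print(gen_layer_code("relu", {"x": "conv1"}, "relu1", {}))
# relu1 = fluid.layers.relu(x=conv1)
```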
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
from paddle.fluid.proto import framework_pb2
from x2paddle.core.util import *
import inspect
import os
def export_paddle_param(param, param_name, dir):
dtype_map = {
"int16": [framework_pb2.VarType.INT16, 'h'],
"int32": [framework_pb2.VarType.INT32, 'i'],
"int64": [framework_pb2.VarType.INT64, 'q'],
"float16": [framework_pb2.VarType.FP16, 'e'],
"float32": [framework_pb2.VarType.FP32, 'f'],
"float64": [framework_pb2.VarType.FP64, 'd'],
"bool": [framework_pb2.VarType.BOOL, None]
}
shape = param.shape
if str(param.dtype) in ['uint8', 'uint_8', 'bool']:
param = param.astype('int64')
if len(shape) == 0:
assert param.size == 1, "Unexpected situation happened!"
shape = [1]
assert str(
param.dtype) in dtype_map, "Unknown dtype {} of params: {}.".format(
str(param.dtype), param_name)
fp = open(os.path.join(dir, param_name), 'wb')
# Write the header placeholders expected by Paddle's parameter file format.
numpy.array([0], dtype='int32').tofile(fp)
numpy.array([0], dtype='int64').tofile(fp)
numpy.array([0], dtype='int32').tofile(fp)
tensor_desc = framework_pb2.VarType.TensorDesc()
tensor_desc.data_type = dtype_map[str(param.dtype)][0]
tensor_desc.dims.extend(shape)
desc_size = tensor_desc.ByteSize()
numpy.array([desc_size], dtype='int32').tofile(fp)
fp.write(tensor_desc.SerializeToString())
param.tofile(fp)
fp.close()
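`export_paddle_param` writes a small binary layout: three header placeholders (`int32`, `int64`, `int32`), an `int32` descriptor size, the serialized `TensorDesc`, then the raw parameter data. A standalone sketch of that layout using `struct` (the field meanings are an assumption read off the code above; `write_param` is a hypothetical helper):

```python
import io
import struct

# Sketch of the binary layout written above: headers, descriptor
# size, descriptor bytes, then raw tensor data.
def write_param(desc_bytes, data_bytes):
    fp = io.BytesIO()
    fp.write(struct.pack("<i", 0))                # int32 placeholder
    fp.write(struct.pack("<q", 0))                # int64 placeholder
    fp.write(struct.pack("<i", 0))                # int32 placeholder
    fp.write(struct.pack("<i", len(desc_bytes)))  # descriptor size
    fp.write(desc_bytes)
    fp.write(data_bytes)
    return fp.getvalue()


blob = write_param(b"DESC", b"\x00" * 8)
print(len(blob))  # 4 + 8 + 4 + 4 + 4 + 8 = 32
```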
# This function will be copied into the generated code file
def run_net(param_dir="./"):
import os
inputs, outputs = x2paddle_net()
ops = fluid.default_main_program().global_block().ops
used_vars = list()
for op in ops:
used_vars += op.input_arg_names
tmp = list()
for input in inputs:
if isinstance(input, list):
for ipt in input:
if ipt.name not in used_vars:
continue
tmp.append(ipt)
else:
if input.name not in used_vars:
continue
tmp.append(input)
inputs = tmp
for i, out in enumerate(outputs):
if isinstance(out, list):
for out_part in out:
outputs.append(out_part)
del outputs[i]
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
def if_exist(var):
b = os.path.exists(os.path.join(param_dir, var.name))
return b
fluid.io.load_vars(
exe, param_dir, fluid.default_main_program(), predicate=if_exist)
class OpMapper(object):
def __init__(self):
self.paddle_codes = ""
self.tab = " "
self.net_code = list()
self.weights = dict()
self.inputs = list()
self.outputs = list()
def op_checker(self):
unsupported_ops = set()
for node_name in self.graph.topo_sort:
node = self.graph.get_node(node_name)
op = node.layer_type
if not hasattr(self, op):
unsupported_ops.add(op)
if len(unsupported_ops) == 0:
return True
else:
print("There are {} ops not supported yet, listed below".format(
len(unsupported_ops)))
for op in unsupported_ops:
print(op)
return False
def add_codes(self, codes, indent=0):
if isinstance(codes, list):
for code in codes:
self.paddle_codes += (
self.tab * indent + code.strip('\n') + '\n')
elif isinstance(codes, str):
self.paddle_codes += (self.tab * indent + codes.strip('\n') + '\n')
else:
raise Exception("Unknown type of codes")
def add_heads(self):
self.add_codes("from paddle.fluid.initializer import Constant")
self.add_codes("from paddle.fluid.param_attr import ParamAttr")
self.add_codes("import paddle.fluid as fluid")
self.add_codes("import paddle")
self.add_codes("")
def save_inference_model(self, save_dir, params_merge):
self.save_python_model(save_dir)
import sys
import paddle.fluid as fluid
py_code_dir = os.path.join(save_dir, "model_with_code")
sys.path.append(py_code_dir)
import model
try:
inputs, outputs = model.x2paddle_net()
ops = fluid.default_main_program().global_block().ops
used_vars = list()
for op in ops:
used_vars += op.input_arg_names
for i, out in enumerate(outputs):
if isinstance(out, list):
for out_part in out:
outputs.append(out_part)
del outputs[i]
input_names = list()
for input in inputs:
if isinstance(input, list):
for ipt in input:
if ipt.name not in used_vars:
continue
input_names.append(ipt.name)
else:
if input.name not in used_vars:
continue
input_names.append(input.name)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
def if_exist(var):
b = os.path.exists(
os.path.join(os.path.join(py_code_dir, var.name)))
return b
fluid.io.load_vars(
exe,
py_code_dir,
fluid.default_main_program(),
predicate=if_exist)
if params_merge:
fluid.io.save_inference_model(
dirname=os.path.join(save_dir, "inference_model"),
feeded_var_names=input_names,
target_vars=outputs,
executor=exe,
params_filename="__params__")
else:
fluid.io.save_inference_model(
dirname=os.path.join(save_dir, "inference_model"),
feeded_var_names=input_names,
target_vars=outputs,
executor=exe,
params_filename=None)
except:
raise Exception(
"Paddle code was saved in {}/model.py, but something seems to be wrong; please check model.py manually."
.format(py_code_dir))
def save_python_model(self, save_dir):
if not os.path.exists(save_dir):
os.makedirs(save_dir)
py_code_dir = os.path.join(save_dir, "model_with_code")
if not os.path.exists(py_code_dir):
os.makedirs(py_code_dir)
for name, param in self.weights.items():
export_paddle_param(param, name, py_code_dir)
self.add_heads()
if hasattr(self, "used_custom_layers"):
for _, layer_code in self.used_custom_layers.items():
self.add_codes(layer_code, 0)
self.add_codes("", 0)
self.add_codes("\ndef x2paddle_net():", 0)
self.add_codes("paddle.enable_static()", 1)
for i in range(len(self.graph.topo_sort)):
node_name = self.graph.topo_sort[i]
node = self.graph.get_node(node_name)
if node is None:
continue
if len(node.fluid_code.layers) == 0:
continue
self.add_codes(node.fluid_code.gen_codes(), 1)
self.add_codes("", 0)
input_str = "["
for name in self.graph.input_nodes:
input_str += (name + ", ")
input_str = input_str.strip(", ") + "]"
output_str = "["
for name in self.graph.output_nodes:
output_str += (name + ", ")
output_str = output_str.strip(", ") + "]"
return_code = "return {}, {}".format(input_str, output_str)
self.add_codes(return_code, 1)
self.add_codes("", 0)
self.add_codes(inspect.getsourcelines(run_net)[0])
fp = open(os.path.join(py_code_dir, "model.py"), 'w')
fp.write(self.paddle_codes)
fp.close()
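The `input_str`/`output_str` loops in `save_python_model` above build a bracketed, comma-separated list of node names for the generated `return` statement; a minimal stand-alone sketch of the same string construction (the node names below are made up for illustration):

```python
# Hypothetical sketch of the bracketed name-list building done in
# save_python_model; str.join produces the same result as the
# append-then-strip(", ") loop, including for an empty name list.
def build_name_list(names):
    return "[" + ", ".join(names) + "]"

return_code = "return {}, {}".format(
    build_name_list(["x0", "x1"]),  # example input node names (made up)
    build_name_list(["out0"]))      # example output node names (made up)
print(return_code)
```

With the illustrative names above this prints `return [x0, x1], [out0]`, matching the line the converter emits into `model.py`.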
......@@ -17,7 +17,6 @@ import sys
from google.protobuf import text_format
import numpy as np
from x2paddle.core.graph import GraphNode, Graph
from x2paddle.core.fluid_code import FluidCode
from x2paddle.decoder import caffe_shape_inference
......@@ -55,7 +54,6 @@ class CaffeGraphNode(GraphNode):
super(CaffeGraphNode, self).__init__(
layer, layer_name.replace('/', '_').replace('-', '_').lower())
self.layer_type = type_str
self.fluid_code = FluidCode()
self.data = None
def set_params(self, params):
......@@ -258,7 +256,8 @@ class CaffeGraph(Graph):
assert input_node_name in self.node_map, 'The {} isn\'t a valid node'.format(
name)
input_node = self.node_map[input_node_name]
if len(input_node.layer.top) > 1 and input_node.layer_type not in ["Input", "MemoryData"]:
if len(input_node.layer.top
) > 1 and input_node.layer_type not in ["Input", "MemoryData"]:
need_idx = list(input_node.layer.top).index(node.layer.bottom[idx])
name = input_node_name + ':' + str(need_idx)
else:
......
......@@ -13,7 +13,6 @@
# limitations under the License.
from x2paddle.core.graph import GraphNode, Graph
from x2paddle.core.fluid_code import FluidCode
from x2paddle.decoder.onnx_shape_inference import SymbolicShapeInference
from onnx.checker import ValidationError
from onnx.checker import check_model
......@@ -44,7 +43,6 @@ class ONNXGraphNode(GraphNode):
else:
super(ONNXGraphNode, self).__init__(layer, layer_name)
self.layer_type = layer.op_type
self.fluid_code = FluidCode()
self.attr_map = self.get_attr_map()
self.out_shapes = list()
self.dtype = None
......@@ -97,8 +95,10 @@ class ONNXGraphNode(GraphNode):
return self.attr_map[name]
def output(self, index=0):
if index >0 and len(self.layer.output) <= index:
raise IndexError('Output numbers of Node:{} is {} <= index:{}'.format(self.layer_name, len(self.layer.output), index))
if index > 0 and len(self.layer.output) <= index:
raise IndexError(
'Output numbers of Node:{} is {} <= index:{}'.format(
self.layer_name, len(self.layer.output), index))
return self.layer.output[index]
......@@ -113,7 +113,6 @@ class ONNXGraphDataNode(GraphNode):
else:
self.layer_type = 'create_parameter'
self.layer_name = layer_name
self.fluid_code = FluidCode()
self.weight = None
self.embeded_as = None
self.which_child = {}
......@@ -320,7 +319,8 @@ class ONNXGraph(Graph):
if first_i == n_i:
continue
if n_ipt == nd.name:
new_nd_name = "{}/{}".format(nd.name, n_i)
new_nd_name = "{}/{}".format(nd.name,
n_i)
if new_nd_name not in node.which_child:
node.which_child[new_nd_name] = idx
break
......@@ -353,7 +353,6 @@ class ONNXGraph(Graph):
return ipt_node
def graph_weights(self):
"""
generator for weights
......
......@@ -13,7 +13,6 @@
# limitations under the License.
from x2paddle.core.graph import GraphNode, Graph
from x2paddle.core.fluid_code import FluidCode
from tensorflow.python.framework import tensor_util
from tensorflow.core.framework import attr_value_pb2
import tensorflow as tf
......@@ -36,7 +35,6 @@ class TFGraphNode(GraphNode):
self.layer_type = layer.op
self.tf_data_format = data_format
self.pd_data_format = "NCHW"
self.fluid_code = FluidCode()
self.dtype_map = {
1: "float32",
......@@ -175,7 +173,6 @@ class TFGraph(Graph):
self._remove_identity_node()
self._remove_cast_node()
def get_node(self, node_name, copy=False):
items = node_name.strip().split(':')
items[0] = items[0].replace('/', '_').replace('-', '_')
......@@ -290,7 +287,6 @@ class TFGraph(Graph):
else:
self.output_nodes[idx] = input_node.layer_name
def _remove_cast_node(self):
cast_node = list()
for node_name, node in self.node_map.items():
......@@ -466,13 +462,15 @@ class TFDecoder(object):
for b in batch_size:
for input_name, info in self.inputs_info.items():
(shape, dtype) = cp.deepcopy(info)
input_tensor = self.sess.graph.get_tensor_by_name(input_name + ":0")
input_tensor = self.sess.graph.get_tensor_by_name(input_name +
":0")
if shape.count(-1) > 0:
shape[shape.index(-1)] = b
feed[input_tensor] = numpy.random.random_sample(shape)
output_tensor = self.sess.graph.get_tensor_by_name(tensor_name)
if use_diff_inputs:
results.append(self.sess.run([output_tensor], feed)[0].flatten())
results.append(
self.sess.run([output_tensor], feed)[0].flatten())
else:
return self.sess.run([output_tensor], feed)[0]
......@@ -499,4 +497,3 @@ class TFDecoder(object):
return results[0].tolist()
else:
raise Exception("Couldn't infer a stable value for the shape tensor")
\ No newline at end of file
......@@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from .detectionoutput import DetectionOutput
from .normalize import Normalize
from .priorbox import PriorBox
......
......@@ -15,15 +15,18 @@
import paddle
import paddle.fluid as fluid
class DetectionOutput(object):
def __init__(self, nms_threshold, nms_top_k, keep_top_k, nms_eta, score_threshold, background_label):
def __init__(self, nms_threshold, nms_top_k, keep_top_k, nms_eta,
score_threshold, background_label):
self.detection_output_layer_attrs = {
"background_label": background_label,
"nms_threshold": nms_threshold,
"nms_top_k": nms_top_k,
"keep_top_k": keep_top_k,
"score_threshold": score_threshold,
"nms_eta": nms_eta}
"nms_eta": nms_eta
}
def __call__(self, x0, x1, x2):
priorbox_list = paddle.split(x2, num_or_sections=2, axis=1)
......@@ -34,11 +37,10 @@ class DetectionOutput(object):
pb_dim = fluid.layers.shape(pb)[0]
loc = paddle.reshape(x0, shape=[-1, pb_dim, 4])
conf_flatten = paddle.reshape(x1, shape=[0, pb_dim, -1])
out = fluid.layers.detection_output(loc=loc,
out = fluid.layers.detection_output(
loc=loc,
scores=conf_flatten,
prior_box=pb,
prior_box_var=pbv,
**self.detection_output_layer_attrs)
return out
\ No newline at end of file
......@@ -15,6 +15,7 @@
import paddle
import paddle.fluid as fluid
class Normalize(object):
def __init__(self, axis):
self.axis = axis
......@@ -32,5 +33,3 @@ class Normalize(object):
perm.insert(self.axis, dim)
out = paddle.transpose(out, perm=perm)
return out
\ No newline at end of file
......@@ -15,11 +15,10 @@
import paddle
import paddle.fluid as fluid
class PriorBox(object):
def __init__(self, min_sizes, max_sizes,
aspect_ratios, variance, flip,
clip, steps, offset,
min_max_aspect_ratios_order):
def __init__(self, min_sizes, max_sizes, aspect_ratios, variance, flip,
clip, steps, offset, min_max_aspect_ratios_order):
self.priorbox_layer_attrs = {
"min_sizes": min_sizes,
"max_sizes": max_sizes,
......@@ -29,12 +28,12 @@ class PriorBox(object):
"clip": clip,
"steps": steps,
"offset": offset,
"min_max_aspect_ratios_order": min_max_aspect_ratios_order}
"min_max_aspect_ratios_order": min_max_aspect_ratios_order
}
def __call__(self, x0, x1):
box, var = fluid.layers.prior_box(input=x0,
image=x1,
**self.priorbox_layer_attrs)
box, var = fluid.layers.prior_box(
input=x0, image=x1, **self.priorbox_layer_attrs)
box = paddle.reshape(x=box, shape=[1, 1, -1])
var = paddle.reshape(x=var, shape=[1, 1, -1])
out = paddle.concat(x=[box, var], axis=1)
......
......@@ -15,19 +15,17 @@
import paddle
import paddle.fluid as fluid
class ROIPooling(object):
def __init__(self, pooled_height, pooled_width, spatial_scale):
self.roipooling_layer_attrs = {
"pooled_height": pooled_height,
"pooled_width": pooled_width,
"spatial_scale": spatial_scale}
"spatial_scale": spatial_scale
}
def __call__(self, x0, x1):
slice_x1 = paddle.slice(input=x1, axes=[1],
starts=[1], ends=[5])
out = fluid.layers.roi_pool(input=x0,
rois=slice_x1,
**self.roipooling_layer_attrs)
slice_x1 = paddle.slice(input=x1, axes=[1], starts=[1], ends=[5])
out = fluid.layers.roi_pool(
input=x0, rois=slice_x1, **self.roipooling_layer_attrs)
return out
\ No newline at end of file
......@@ -15,6 +15,7 @@
import paddle
import paddle.fluid as fluid
class Select(object):
def __init__(self, input_shape, point, axis):
self.point = point
......@@ -27,9 +28,5 @@ class Select(object):
end = self.point[1]
else:
end = self.input_shape[self.axis]
out = paddle.slice(x=x,
start=start,
end=end,
axes=[self.axis])
out = paddle.slice(x=x, start=start, end=end, axes=[self.axis])
return out
\ No newline at end of file
......@@ -15,7 +15,6 @@
import sys
import numbers
import numpy as np
from x2paddle.core.op_mapper import OpMapper
from x2paddle.core.util import *
from x2paddle.core.program import PaddleGraph
from x2paddle.decoder.caffe_decoder import CaffeGraphNode
......@@ -57,14 +56,16 @@ def _adjust_parameters(node):
shape_new = data[idx].shape
return data
def _get_kernel_parameters(kind, params):
assert kind in ["Convolution", "Pooling", "Deconvolution", "ConvolutionDepthwise"]
assert kind in [
"Convolution", "Pooling", "Deconvolution", "ConvolutionDepthwise"
]
[k_h, k_w] = [1, 1]
if isinstance(params.kernel_size, numbers.Number):
[k_h, k_w] = [params.kernel_size] * 2
elif len(params.kernel_size) > 0:
k_h = params.kernel_h if params.kernel_h > 0 else params.kernel_size[
0]
k_h = params.kernel_h if params.kernel_h > 0 else params.kernel_size[0]
k_w = params.kernel_w if params.kernel_w > 0 else params.kernel_size[
len(params.kernel_size) - 1]
elif params.kernel_h > 0 or params.kernel_w > 0:
......@@ -85,8 +86,8 @@ def _get_kernel_parameters(kind, params):
[p_h, p_w] = [params.pad] * 2
elif len(params.pad) > 0:
p_h = params.pad_h if params.pad_h > 0 else params.pad[0]
p_w = params.pad_w if params.pad_w > 0 else params.pad[len(
params.pad) - 1]
p_w = params.pad_w if params.pad_w > 0 else params.pad[len(params.pad) -
1]
elif params.pad_h > 0 or params.pad_w > 0:
p_h = params.pad_h
p_w = params.pad_w
......@@ -114,19 +115,18 @@ def _get_kernel_parameters(kind, params):
return c_o, kernel, stride, pad, dilation, group
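The scalar-versus-list handling in `_get_kernel_parameters` above can be isolated into a small sketch; `resolve_kernel` below is a hypothetical helper (not part of x2paddle) mirroring only the kernel-size branch:

```python
# Hypothetical sketch of Caffe kernel-size resolution: a scalar
# kernel_size applies to both spatial dims, while explicit
# kernel_h/kernel_w (when > 0) override entries of a kernel_size list.
def resolve_kernel(kernel_size, kernel_h=0, kernel_w=0):
    if isinstance(kernel_size, int):  # the real code checks numbers.Number
        return [kernel_size, kernel_size]
    k_h, k_w = 1, 1
    if len(kernel_size) > 0:
        k_h = kernel_h if kernel_h > 0 else kernel_size[0]
        k_w = kernel_w if kernel_w > 0 else kernel_size[-1]
    elif kernel_h > 0 or kernel_w > 0:
        k_h, k_w = kernel_h, kernel_w
    return [k_h, k_w]
```

The stride, pad, and dilation parameters follow the same pattern in the surrounding diff, so one helper shape covers all four.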
class CaffeOpMapper(OpMapper):
class CaffeOpMapper():
directly_map_ops = {
'Sigmoid': ['paddle.nn.layer.Sigmoid'],
'TanH': ['paddle.nn.Tanh'],
}
def __init__(self, decoder):
super(CaffeOpMapper, self).__init__()
self.graph = decoder.caffe_graph
if not self.op_checker():
raise Exception("Model is not supported yet.")
self.params = dict()
self.paddle_graph = PaddleGraph(parent_layer=None, graph_type="dygraph", source_type="caffe")
self.paddle_graph = PaddleGraph(parent_layer=None, source_type="caffe")
self.paddle_graph.outputs = self.graph.output_nodes
self.input_index = 0
self.inputs_info = {}
......@@ -162,8 +162,8 @@ class CaffeOpMapper(OpMapper):
return True
else:
if len(unsupported_ops) > 0:
print("\n========= {} OPs are not supported yet ===========".format(
len(unsupported_ops)))
print("\n========= {} OPs are not supported yet ===========".
format(len(unsupported_ops)))
for op in unsupported_ops:
print("========== {} ============".format(op))
return False
......@@ -185,8 +185,7 @@ class CaffeOpMapper(OpMapper):
outputs=layer_outputs)
else:
self.paddle_graph.add_layer(
kernel=paddle_op,
inputs={"x": input.name},
kernel=paddle_op, inputs={"x": input.name},
outputs=[node.name])
def Input(self, node):
......@@ -196,7 +195,8 @@ class CaffeOpMapper(OpMapper):
outputs=[node.layer_name],
data="x{}".format(self.input_index))
shape = list(node.layer.input_param.shape[0].dim)[1:]
self.inputs_info["x{}".format(self.input_index)] = [[-1] + shape, "float32"]
self.inputs_info["x{}".format(self.input_index)] = [[-1] + shape,
"float32"]
self.input_index += 1
def MemoryData(self, node):
......@@ -233,8 +233,9 @@ class CaffeOpMapper(OpMapper):
"The parameter of {} (type is {}) is not set, so the parameters are initialized to 0"
.format(node.layer_name, node.layer_type))
data.append(
np.zeros([out_channel, node.in_shapes[0][1], kernel[0], kernel[1]]).astype(
'float32'))
np.zeros([
out_channel, node.in_shapes[0][1], kernel[0], kernel[1]
]).astype('float32'))
data.append(np.zeros([out_channel, ]).astype('float32'))
else:
data = _adjust_parameters(node)
......@@ -279,8 +280,9 @@ class CaffeOpMapper(OpMapper):
"The parameter of {} (type is {}) is not set, so the parameters are initialized to 0"
.format(node.layer_name, node.layer_type))
data.append(
np.zeros([out_channel, node.in_shapes[0][1], kernel[0], kernel[1]]).astype(
'float32'))
np.zeros([
out_channel, node.in_shapes[0][1], kernel[0], kernel[1]
]).astype('float32'))
data.append(np.zeros([out_channel, ]).astype('float32'))
else:
data = _adjust_parameters(node)
......@@ -315,18 +317,21 @@ class CaffeOpMapper(OpMapper):
params = node.layer.convolution_param
out_channel, kernel, stride, pad, dilation, group = _get_kernel_parameters(
node.layer_type, params)
out_channel = params.num_output if params.num_output is not None else node.in_shapes[0][1]
out_channel = params.num_output if params.num_output is not None else node.in_shapes[
0][1]
in_channel = node.in_shapes[0][1]
group = int(in_channel / (in_channel / out_channel)) if in_channel > out_channel else int(in_channel /
(out_channel / in_channel))
group = int(in_channel / (
in_channel / out_channel)) if in_channel > out_channel else int(
in_channel / (out_channel / in_channel))
if data is None:
data = []
print(
"The parameter of {} (type is {}) is not set, so the parameters are initialized to 0"
.format(node.layer_name, node.layer_type))
data.append(
np.zeros([out_channel, node.in_shapes[0][1], kernel[0], kernel[1]]).astype(
'float32'))
np.zeros([
out_channel, node.in_shapes[0][1], kernel[0], kernel[1]
]).astype('float32'))
data.append(np.zeros([out_channel, ]).astype('float32'))
else:
data = _adjust_parameters(node)
......@@ -428,7 +433,6 @@ class CaffeOpMapper(OpMapper):
outputs=[node.layer_name],
**layer_attrs)
def InnerProduct(self, node):
linear_name = name_generator("linear", self.nn_name2id)
output_name = node.layer_name
......@@ -442,10 +446,11 @@ class CaffeOpMapper(OpMapper):
.format(node.layer_name, node.layer_type))
data = []
data.append(
np.zeros([node.in_shapes[0][1], params.num_output]).astype("float32").astype(
"float32"))
np.zeros([node.in_shapes[0][1], params.num_output]).astype(
"float32").astype("float32"))
data.append(
np.zeros([params.num_output]).astype("float32").astype("float32"))
np.zeros([params.num_output]).astype("float32").astype(
"float32"))
else:
data = _adjust_parameters(node)
# Reshape the parameters to Paddle's ordering
......@@ -642,25 +647,21 @@ class CaffeOpMapper(OpMapper):
inputs_dict['x'] = node.layer_name + '_mul0'
inputs_dict['y'] = node.layer_name + '_mul1'
self.paddle_graph.add_layer(
"paddle.add",
inputs=inputs_dict,
"paddle.add", inputs=inputs_dict,
outputs=[node.layer_name])
else:
inputs_dict = {}
inputs_dict['x'] = input0_name
inputs_dict['y'] = input1_name
self.paddle_graph.add_layer(
"paddle.add",
inputs=inputs_dict,
"paddle.add", inputs=inputs_dict,
outputs=[node.layer_name])
else:
inputs_dict = {}
inputs_dict['x'] = input0_name
inputs_dict['y'] = input1_name
self.paddle_graph.add_layer(
"paddle.max",
inputs=inputs_dict,
outputs=[node.layer_name])
"paddle.max", inputs=inputs_dict, outputs=[node.layer_name])
def BatchNorm(self, node):
batchnorm_name = name_generator("batchnorm", self.nn_name2id)
......@@ -702,7 +703,7 @@ class CaffeOpMapper(OpMapper):
"paddle.unsqueeze",
inputs={"x": input.name},
outputs=[input.name],
axis=[2,3])
axis=[2, 3])
self.paddle_graph.add_layer(
"paddle.nn.BatchNorm2D",
inputs={"input": input.name},
......@@ -713,7 +714,7 @@ class CaffeOpMapper(OpMapper):
"paddle.squeeze",
inputs={"x": node.layer_name},
outputs=[node.layer_name],
axis=[2,3])
axis=[2, 3])
def Scale(self, node):
if node.data is None:
......@@ -734,8 +735,8 @@ class CaffeOpMapper(OpMapper):
node.in_shapes[0][1],
]).astype("float32")
else:
self.params[node.layer_name + "_cparam2"] = np.squeeze(node.data[
1]).astype("float32")
self.params[node.layer_name + "_cparam2"] = np.squeeze(
node.data[1]).astype("float32")
params = node.layer.scale_param
axis = params.axis
inputs = []
......@@ -787,9 +788,7 @@ class CaffeOpMapper(OpMapper):
output_shape = node.out_shapes[0]
if axis == -1:
self.paddle_graph.add_layer(
"paddle.add",
inputs=inputs_dict,
outputs=[node.layer_name])
"paddle.add", inputs=inputs_dict, outputs=[node.layer_name])
else:
if axis < 0:
axis = axis + len(output_shape)
......@@ -803,9 +802,7 @@ class CaffeOpMapper(OpMapper):
outputs=[node.layer_name + "_cparam2"],
shape=new_shape)
self.paddle_graph.add_layer(
"paddle.add",
inputs=inputs_dict,
outputs=[node.layer_name])
"paddle.add", inputs=inputs_dict, outputs=[node.layer_name])
def Reshape(self, node):
input = self.graph.get_input_node(node, idx=0, copy=True)
......@@ -816,7 +813,6 @@ class CaffeOpMapper(OpMapper):
outputs=[node.layer_name],
shape=output_shape)
def ArgMax(self, node):
assert len(node.inputs) == 1 and len(
node.outputs
......@@ -834,7 +830,10 @@ class CaffeOpMapper(OpMapper):
self.paddle_graph.add_layer(
"paddle.topk",
inputs={"x": input.name},
outputs=[node.layer_name + "_topk_var", node.layer_name + "_index_var"],
outputs=[
node.layer_name + "_topk_var",
node.layer_name + "_index_var"
],
k=top_k)
self.paddle_graph.add_layer(
"paddle.cast",
......@@ -843,7 +842,12 @@ class CaffeOpMapper(OpMapper):
dtype="{}_topk_var.dtype".format(node.layer_name))
self.paddle_graph.add_layer(
"paddle.concat",
inputs={"x": [node.layer_name + "_topk_var", node.layer_name + "_index_var"]},
inputs={
"x": [
node.layer_name + "_topk_var",
node.layer_name + "_index_var"
]
},
outputs=[node.layer_name],
axis=axis)
else:
......@@ -881,7 +885,6 @@ class CaffeOpMapper(OpMapper):
inputs=inputs_dict,
outputs=[node.layer_name + "_mul"])
def Crop(self, node):
assert len(
node.inputs) == 2, "The count of Crop node\'s input is not 2."
......@@ -1012,11 +1015,13 @@ class CaffeOpMapper(OpMapper):
scale=coeff)
def DetectionOutput(self, node):
detection_output_name = name_generator("detection_output", self.nn_name2id)
detection_output_name = name_generator("detection_output",
self.nn_name2id)
output_name = node.layer_name
layer_outputs = [detection_output_name, output_name]
assert len(
node.inputs) == 3, "The count of DetectionOutput node\'s input is not 3."
node.
inputs) == 3, "The count of DetectionOutput node\'s input is not 3."
inputs_dict = dict()
for i in range(len(node.inputs)):
input = self.graph.get_input_node(node, idx=i, copy=True)
......@@ -1048,7 +1053,8 @@ class CaffeOpMapper(OpMapper):
"nms_top_k": nms_param_dict["top_k"],
"keep_top_k": params.keep_top_k,
"score_threshold": params.confidence_threshold,
"nms_eta": nms_param_dict["eta"]}
"nms_eta": nms_param_dict["eta"]
}
self.paddle_graph.add_layer(
kernel="custom_layer:DetectionOutput",
inputs=inputs_dict,
......@@ -1073,7 +1079,6 @@ class CaffeOpMapper(OpMapper):
else:
self.params[param_name] = _adjust_parameters(node)[0]
self.paddle_graph.add_layer(
"self.create_parameter",
inputs={},
......@@ -1081,8 +1086,7 @@ class CaffeOpMapper(OpMapper):
shape=self.params[param_name].shape,
attr=string(param_name))
inputs_dict = {}
layer_attrs = {
"axis": -1 if params.channel_shared else 1}
layer_attrs = {"axis": -1 if params.channel_shared else 1}
self.paddle_graph.add_layer(
"custom_layer:Normalize",
inputs={"x": input.name,
......@@ -1126,7 +1130,8 @@ class CaffeOpMapper(OpMapper):
"clip": params.clip,
"steps": steps,
"offset": params.offset,
"min_max_aspect_ratios_order": True}
"min_max_aspect_ratios_order": True
}
self.paddle_graph.add_layer(
"custom_layer:PriorBox",
inputs=inputs_dict,
......@@ -1160,7 +1165,8 @@ class CaffeOpMapper(OpMapper):
layer_attrs = {
"pooled_height": params.pooled_h,
"pooled_width": params.pooled_w,
"spatial_scale": params.spatial_scale}
"spatial_scale": params.spatial_scale
}
self.paddle_graph.add_layer(
"custom_layer:ROIPooling",
inputs=inputs_dict,
......@@ -1168,8 +1174,8 @@ class CaffeOpMapper(OpMapper):
**layer_attrs)
def ShuffleChannel(self, node):
assert len(
node.inputs) == 1, "The count of ShuffleChannel node\'s input is not 1."
assert len(node.inputs
) == 1, "The count of ShuffleChannel node\'s input is not 1."
input = self.graph.get_input_node(node, idx=0, copy=True)
params = node.layer.shuffle_channel_param
self.paddle_graph.add_layer(
......@@ -1186,7 +1192,8 @@ class CaffeOpMapper(OpMapper):
layer_attrs = {
"align_corners": False,
"scale_factor": params.scale,
"mode": "nearest"}
"mode": "nearest"
}
self.paddle_graph.add_layer(
"paddle.nn.functional.interpolate",
inputs={"x": input.name},
......@@ -1205,13 +1212,10 @@ class CaffeOpMapper(OpMapper):
layer_attrs = {
"input_shape": input_shape,
"point": params.slice_point,
"axis": params.axis}
"axis": params.axis
}
self.paddle_graph.add_layer(
"custom_layer:Select",
inputs={"x": input.name},
outputs=layer_outputs,
**layer_attrs)
......@@ -14,12 +14,9 @@
import paddle
class LocalResponseNorm(object):
def __init__(self,
size,
alpha=1e-4,
beta=0.75,
k=1.):
def __init__(self, size, alpha=1e-4, beta=0.75, k=1.):
self.size = size
self.alpha = alpha
self.beta = beta
......
......@@ -14,6 +14,7 @@
import paddle
class OneHot(object):
def __init__(self, axis):
self.axis = axis
......@@ -25,11 +26,13 @@ class OneHot(object):
if self.axis < 0:
real_axis = self.axis + rank + 1
depth_range = paddle.arange(end=depth)
ls = tuple(indices_shape[0: real_axis])
rs = tuple(indices_shape[real_axis: rank])
targets = paddle.reshape(depth_range, (1,) * (real_axis-0) + tuple(depth_range.shape) + (1,) * (rank-real_axis))
ls = tuple(indices_shape[0:real_axis])
rs = tuple(indices_shape[real_axis:rank])
targets = paddle.reshape(depth_range, (1, ) *
(real_axis - 0) + tuple(depth_range.shape) +
(1, ) * (rank - real_axis))
mod = paddle.mod(indices, depth)
v = paddle.reshape(mod, ls + (1,) + rs)
v = paddle.reshape(mod, ls + (1, ) + rs)
out = targets == v
out = paddle.cast(out, "float32")
on_value = paddle.slice(values, axes=[0], starts=[1], ends=[2])
......
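The `OneHot` custom layer above implements one-hot encoding by reshaping an `arange` over `depth` and comparing it elementwise against the (depth-wrapped) indices, letting broadcasting insert the new axis. A NumPy sketch of the same trick (function name and shapes are illustrative only, not part of x2paddle):

```python
import numpy as np

def one_hot(indices, depth, axis=-1):
    # Insert a depth-sized axis at `axis`, then compare it elementwise
    # against indices % depth; broadcasting produces the one-hot tensor.
    indices = np.asarray(indices)
    rank = indices.ndim
    if axis < 0:
        axis = axis + rank + 1
    depth_range = np.arange(depth)
    targets = depth_range.reshape(
        (1, ) * axis + (depth, ) + (1, ) * (rank - axis))
    v = np.expand_dims(indices % depth, axis)
    return (targets == v).astype("float32")
```

For example, `one_hot([0, 2], 3)` yields a `(2, 3)` array with a single 1 per row, and passing `axis=1` on a 2-D index tensor places the depth dimension in the middle, mirroring the `ls + (1,) + rs` reshape in the Paddle code.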
......@@ -15,6 +15,7 @@
import paddle
from x2paddle.core.util import *
class PadAllDim2(object):
def __init__(self, value, mode):
self.layer_attrs = {}
......@@ -22,7 +23,6 @@ class PadAllDim2(object):
self.layer_attrs['data_format'] = 'NCHW'
self.layer_attrs['value'] = value
def __call__(self, x, pad):
pad = paddle.reshape(pad, shape=[2, -1])
pad = paddle.transpose(pad, perm=[1, 0])
......
......@@ -15,6 +15,7 @@
import paddle
from x2paddle.core.util import *
class PadAllDim4(object):
def __init__(self, value, mode):
self.layer_attrs = {}
......@@ -22,7 +23,6 @@ class PadAllDim4(object):
self.layer_attrs['data_format'] = 'NCHW'
self.layer_attrs['value'] = value
def __call__(self, x, pad):
pad = paddle.reshape(pad, shape=[2, -1])
pad = paddle.transpose(pad, perm=[1, 0])
......
......@@ -15,14 +15,15 @@
import paddle
from x2paddle.core.util import *
class PadAllDim4WithOneInput(object):
def __init__(self, pad, value, mode):
self.layer_attrs = {}
self.layer_attrs['mode'] = mode
self.layer_attrs['data_format'] = 'NCHW'
self.layer_attrs['value'] = value
self.pad1 = pad[0: 4]
self.pad2 = pad[4: 9]
self.pad1 = pad[0:4]
self.pad2 = pad[4:9]
def __call__(self, x):
x = paddle.nn.functional.pad(x=x, pad=self.pad1, **self.layer_attrs)
......
......@@ -15,6 +15,7 @@
import paddle
from x2paddle.core.util import *
class PadWithTwoInput(object):
def __init__(self, value, mode, data_format):
self.layer_attrs = {}
......@@ -22,7 +23,6 @@ class PadWithTwoInput(object):
self.layer_attrs['data_format'] = data_format
self.layer_attrs['value'] = value
def __call__(self, x, pad):
pad = paddle.reshape(pad, shape=[2, -1])
pad = paddle.transpose(pad, perm=[1, 0])
......
......@@ -13,19 +13,17 @@
# limitations under the License.
import sys
from x2paddle.op_mapper.dygraph.onnx2paddle.opset9 import OpSet9
from x2paddle.core.op_mapper import OpMapper
from x2paddle.op_mapper.onnx2paddle.opset9 import OpSet9
from x2paddle.decoder.onnx_decoder import ONNXGraphNode
from x2paddle.core.program import PaddleGraph
class ONNXOpMapper(OpMapper):
class ONNXOpMapper():
def __init__(self, decoder):
super(ONNXOpMapper, self).__init__()
self.support_op_sets = [9, ]
self.default_op_set = 9
self.graph = decoder.graph
self.paddle_graph = PaddleGraph(parent_layer=None, graph_type="dygraph", source_type="onnx")
self.paddle_graph = PaddleGraph(parent_layer=None, source_type="onnx")
self.paddle_graph.outputs = self.graph.output_nodes
self.opset = self.create_opset(decoder)
if not self.op_checker():
......@@ -53,7 +51,6 @@ class ONNXOpMapper(OpMapper):
self.paddle_graph.set_parameters(self.opset.weights)
self.paddle_graph.set_inputs_info(self.opset.inputs_info)
def op_checker(self):
unsupported_ops = set()
for node_name in self.graph.topo_sort:
......@@ -67,8 +64,8 @@ class ONNXOpMapper(OpMapper):
return True
else:
if len(unsupported_ops) > 0:
print("\n========= {} OPs are not supported yet ===========".format(
len(unsupported_ops)))
print("\n========= {} OPs are not supported yet ===========".
format(len(unsupported_ops)))
for op in unsupported_ops:
print("========== {} ============".format(op))
return False
......
......@@ -411,7 +411,6 @@ class OpSet9():
pooled_width = node.get_attr('output_width')
spatial_scale = node.get_attr('spatial_scale')
sampling_ratio = node.get_attr('sampling_ratio')
#dygraph rois_num is necessary
val_rois_shape = val_rois.name + '_shape'
self.paddle_graph.add_layer(
kernel="paddle.shape",
......