Unverified commit 6c07aac7 authored by SunAhong1993 and committed by GitHub

Remove the static-graph code (#600)

* fix the code

* fix the visit_tuple

* Update stargan.md

* Update ultra_light_fast_generic_face_detector.md

* fix the docs

* remove static

* fix

* fix

* fix

* fix the docs
Co-authored-by: channingss <chen_lingchi@163.com>
Parent 8f3f7c16
......@@ -2,13 +2,13 @@
This document lists the PyTorch-PaddlePaddle API mappings related to loss computation.
| No. | PyTorch API | PaddlePaddle API | Notes |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | [torch.nn.L1Loss](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html?highlight=l1loss#torch.nn.L1Loss) | [paddle.nn.loss.L1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/L1Loss_cn.html#l1loss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 2 | [torch.nn.MSELoss](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mseloss#torch.nn.MSELoss) | [paddle.nn.MSELoss](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mseloss#torch.nn.MSELoss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 3 | [torch.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/CrossEntropyLoss_cn.html#crossentropyloss) | [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/CrossEntropyLoss_cn.html#crossentropyloss) | [Difference comparison](torch.nn.CrossEntropyLoss.md) |
| 4 | [torch.nn.KLDivLoss](https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html?highlight=kldivloss#torch.nn.KLDivLoss) | [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/KLDivLoss_cn.html) | [Difference comparison](torch.nn.KLDivLoss.md) |
| 5 | [torch.nn.BCELoss](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html?highlight=bceloss#torch.nn.BCELoss) | [paddle.nn.BCELoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/BCELoss_cn.html#bceloss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 6 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html?highlight=bcewithlogitsloss#torch.nn.BCEWithLogitsLoss) | [paddle.nn.BCEWithLogitsLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/BCEWithLogitsLoss_cn.html#bcewithlogitsloss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 7 | [torch.nn.SmoothL1Loss](https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html?highlight=torch%20nn%20smoothl1loss#torch.nn.SmoothL1Loss) | [paddle.nn.SmoothL1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/SmoothL1Loss_cn.html#smoothl1loss) | Consistent functionality; parameter names differ; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 1 | [torch.nn.L1Loss](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html?highlight=l1loss#torch.nn.L1Loss) | [paddle.nn.L1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/loss/L1Loss_cn.html#l1loss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 2 | [torch.nn.MSELoss](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mseloss#torch.nn.MSELoss) | [paddle.nn.MSELoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MSELoss_cn.html#mseloss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 3 | [torch.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/CrossEntropyLoss_cn.html#crossentropyloss) | [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/CrossEntropyLoss_cn.html#crossentropyloss) | [Difference comparison](torch.nn.CrossEntropyLoss.md) |
| 4 | [torch.nn.KLDivLoss](https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html?highlight=kldivloss#torch.nn.KLDivLoss) | [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/KLDivLoss_cn.html#kldivloss) | [Difference comparison](torch.nn.KLDivLoss.md) |
| 5 | [torch.nn.BCELoss](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html?highlight=bceloss#torch.nn.BCELoss) | [paddle.nn.BCELoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BCELoss_cn.html#bceloss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 6 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html?highlight=bcewithlogitsloss#torch.nn.BCEWithLogitsLoss) | [paddle.nn.BCEWithLogitsLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BCEWithLogitsLoss_cn.html#bcewithlogitsloss) | Consistent functionality; PyTorch has the deprecated parameters `size_average` and `reduce`. |
| 7 | [torch.nn.SmoothL1Loss](https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html?highlight=torch%20nn%20smoothl1loss#torch.nn.SmoothL1Loss) | [paddle.nn.SmoothL1Loss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/SmoothL1Loss_cn.html#smoothl1loss) | Consistent functionality; parameter names differ; PyTorch has the deprecated parameters `size_average` and `reduce`. |
***Continuously updated...***
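To make one row of the table concrete, the following minimal sketch (not part of the original mapping doc; it assumes both frameworks are installed) shows `torch.nn.L1Loss` and `paddle.nn.L1Loss` producing the same value under the default `reduction='mean'`:
```python
import numpy as np
import paddle
import torch

pred = np.random.rand(3, 4).astype("float32")
label = np.random.rand(3, 4).astype("float32")

# PyTorch: the deprecated size_average/reduce arguments are simply left unset
torch_loss = torch.nn.L1Loss()(torch.from_numpy(pred), torch.from_numpy(label))

# PaddlePaddle: same semantics with the default reduction='mean'
paddle_loss = paddle.nn.L1Loss()(paddle.to_tensor(pred), paddle.to_tensor(label))

print(float(torch_loss), float(paddle_loss))  # the two values should match
```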
......@@ -7,7 +7,7 @@ torch.nn.CrossEntropyLoss(weight=None,
reduce=None,
reduction='mean')
```
### [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/CrossEntropyLoss_cn.html#crossentropyloss)
### [paddle.nn.CrossEntropyLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/CrossEntropyLoss_cn.html#crossentropyloss)
```python
paddle.nn.CrossEntropyLoss(weight=None,
ignore_index=-100,
......
......@@ -7,7 +7,7 @@ torch.nn.KLDivLoss(size_average=None,
log_target=False)
```
### [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/loss/KLDivLoss_cn.html)
### [paddle.nn.KLDivLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/KLDivLoss_cn.html#kldivloss)
```python
paddle.nn.KLDivLoss(reduction='mean')
```
......
......@@ -9,7 +9,7 @@ torch.nn.AvgPool1d(kernel_size,
count_include_pad=True)
```
### [paddle.nn.AvgPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/AvgPool1D_cn.html#avgpool1d)
### [paddle.nn.AvgPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/AvgPool1D_cn.html#avgpool1d)
```python
paddle.nn.AvgPool1D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.AvgPool2d(kernel_size,
divisor_override=None)
```
### [paddle.nn.AvgPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/AvgPool2D_cn.html#avgpool2d)
### [paddle.nn.AvgPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/AvgPool2D_cn.html#avgpool2d)
```python
paddle.nn.AvgPool2D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.AvgPool3d(kernel_size,
divisor_override=None)
```
### [paddle.nn.AvgPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/AvgPool3D_cn.html#avgpool3d)
### [paddle.nn.AvgPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/AvgPool3D_cn.html#avgpool3d)
```python
paddle.nn.AvgPool3D(kernel_size,
......
......@@ -7,7 +7,7 @@ torch.nn.BatchNorm1d(num_features,
affine=True,
track_running_stats=True)
```
### [paddle.nn.BatchNorm1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/norm/BatchNorm1D_cn.html#batchnorm1d)
### [paddle.nn.BatchNorm1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BatchNorm1D_cn.html#batchnorm1d)
```python
paddle.nn.BatchNorm1D(num_features,
momentum=0.9,
......
......@@ -7,7 +7,7 @@ torch.nn.BatchNorm2d(num_features,
affine=True,
track_running_stats=True)
```
### [paddle.nn.BatchNorm2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/norm/BatchNorm2D_cn.html#batchnorm2d)
### [paddle.nn.BatchNorm2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BatchNorm2D_cn.html#batchnorm2d)
```python
paddle.nn.BatchNorm2D(num_features,
momentum=0.9,
......
......@@ -7,7 +7,7 @@ torch.nn.BatchNorm3d(num_features,
affine=True,
track_running_stats=True)
```
### [paddle.nn.BatchNorm3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/norm/BatchNorm3D_cn.html#batchnorm3d)
### [paddle.nn.BatchNorm3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/BatchNorm3D_cn.html#batchnorm3d)
```python
paddle.nn.BatchNorm3D(num_features,
momentum=0.9,
......
......@@ -4,7 +4,7 @@
torch.nn.ConstantPad1d(padding, value)
```
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad1D_cn.html#pad1d)
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad1D_cn.html#pad1d)
```python
paddle.nn.Pad1D(padding, mode='constant', value=0.0, data_format='NCL', name=None)
```
......
......@@ -4,7 +4,7 @@
torch.nn.ConstantPad2d(padding, value)
```
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad2D_cn.html#pad2d)
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad2D_cn.html#pad2d)
```python
paddle.nn.Pad2D(padding, mode='constant', value=0.0, data_format='NCHW', name=None)
```
......
......@@ -4,7 +4,7 @@
torch.nn.ConstantPad3d(padding, value)
```
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad3D_cn.html#pad3d)
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad3D_cn.html#pad3d)
```python
paddle.nn.Pad3D(padding, mode='constant', value=0.0, data_format='NCDHW', name=None)
```
......
......@@ -13,7 +13,7 @@ torch.nn.Conv1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv1D_cn.html#conv1d)
### [paddle.nn.Conv1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv1D_cn.html#conv1d)
```python
paddle.nn.Conv1D(in_channels,
......@@ -37,7 +37,7 @@ paddle.nn.Conv1D(in_channels,
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning an updatable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
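For example, a minimal sketch (not from the original doc) of how PyTorch's `bias` flag maps onto Paddle's `bias_attr`:
```python
import paddle
import torch

# PyTorch: disable the updatable bias parameter with a single flag
torch_conv = torch.nn.Conv1d(16, 32, kernel_size=3, bias=False)

# PaddlePaddle: a bool bias_attr behaves like PyTorch's bias flag;
# passing a paddle.ParamAttr instead customizes the parameter attributes
paddle_conv = paddle.nn.Conv1D(16, 32, kernel_size=3, bias_attr=False)
```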
#### `padding` settings
***PyTorch***: `padding` only supports list or tuple types.
......
......@@ -13,7 +13,7 @@ torch.nn.Conv2d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv2D_cn.html#conv2d)
### [paddle.nn.Conv2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv2D_cn.html#conv2d)
```python
paddle.nn.Conv2D(in_channels,
......@@ -37,7 +37,7 @@ paddle.nn.Conv2D(in_channels,
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning an updatable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
#### `padding` settings
***PyTorch***: `padding` only supports list or tuple types. It can take 3 formats:
......
......@@ -13,7 +13,7 @@ torch.nn.Conv3d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv3D_cn.html#conv3d)
### [paddle.nn.Conv3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv3D_cn.html#conv3d)
```python
paddle.nn.Conv3D(in_channels,
......@@ -37,7 +37,7 @@ paddle.nn.Conv3D(in_channels,
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning an updatable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
#### `padding` settings
***PyTorch***: `padding` only supports list or tuple types. It can take 3 formats:
(1) A list of 5 two-element tuples: \[\[0,0\], \[0,0\], \[padding_depth_front, padding_depth_back\], \[padding_height_top, padding_height_bottom\], \[padding_width_left, padding_width_right\]\], where each tuple may be replaced with a single integer, meaning the two values in that tuple are equal;
......
......@@ -13,7 +13,7 @@ torch.nn.ConvTranspose1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv1DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv1DTranspose_cn.html#conv1dtranspose)
### [paddle.nn.Conv1DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv1DTranspose_cn.html#conv1dtranspose)
```python
paddle.nn.Conv1DTranspose(in_channels,
out_channels,
......@@ -34,7 +34,7 @@ paddle.nn.Conv1DTranspose(in_channels,
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning an updatable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
#### `padding` size settings
***PyTorch***: `padding` only supports list or tuple types.
......
......@@ -13,7 +13,7 @@ torch.nn.ConvTranspose1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv2DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv2DTranspose_cn.html#conv2dtranspose)
### [paddle.nn.Conv2DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv2DTranspose_cn.html#conv2dtranspose)
```python
paddle.nn.Conv2DTranspose(in_channels,
out_channels,
......@@ -34,7 +34,7 @@ paddle.nn.Conv2DTranspose(in_channels,
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning an updatable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
#### `padding` size settings
***PyTorch***: `padding` only supports list or tuple types.
......
......@@ -13,7 +13,7 @@ torch.nn.ConvTranspose1d(in_channels,
padding_mode='zeros')
```
### [paddle.nn.Conv3DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/conv/Conv3DTranspose_cn.html#conv3dtranspose)
### [paddle.nn.Conv3DTranspose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Conv3DTranspose_cn.html#conv3dtranspose)
```python
paddle.nn.Conv2DTranspose(in_channels,
out_channels,
......@@ -34,7 +34,7 @@ paddle.nn.Conv2DTranspose(in_channels,
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning an updatable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
#### `padding` size settings
***PyTorch***: `padding` only supports list or tuple types.
......
......@@ -4,7 +4,7 @@
torch.nn.Dropout(p=0.5, inplace=False)
```
### [paddle.nn.Dropout](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Dropout_cn.html#dropout)
### [paddle.nn.Dropout](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Dropout_cn.html#dropout)
```python
paddle.nn.Dropout(p=0.5, axis=None, mode="upscale_in_train", name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.Dropout2d(p=0.5, inplace=False)
```
### [paddle.nn.Dropout2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Dropout2D_cn.html#dropout2d)
### [paddle.nn.Dropout2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Dropout2D_cn.html#dropout2d)
```python
paddle.nn.Dropout2D(p=0.5, data_format='NCHW', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.Dropout3d(p=0.5, inplace=False)
```
### [paddle.nn.Dropout3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Dropout3D_cn.html#dropout3d)
### [paddle.nn.Dropout3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Dropout3D_cn.html#dropout3d)
```python
paddle.nn.Dropout3D(p=0.5, data_format='NCDHW', name=None)
```
......
......@@ -9,7 +9,7 @@ torch.nn.Embedding(num_embeddings,
scale_grad_by_freq=False,
sparse=False)
```
### [paddle.nn.Embedding](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Embedding_cn.html#embedding)
### [paddle.nn.Embedding](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Embedding_cn.html#embedding)
```python
paddle.nn.Embedding(num_embeddings,
embedding_dim,
......
......@@ -10,7 +10,7 @@ torch.nn.GRU(input_size,
bidirectional=False)
```
### [paddle.nn.GRU](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/rnn/GRU_cn.html#gru)
### [paddle.nn.GRU](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/GRU_cn.html#gru)
```python
paddle.nn.GRU(input_size,
hidden_size,
......@@ -33,4 +33,4 @@ paddle.nn.GRU(input_size,
### Functional differences
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning updatable bias parameters are used.
***PaddlePaddle***: `weight_ih_attr`/`weight_hh_attr`/`bias_ih_attr`/`bias_hh_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_ih_attr`/`bias_hh_attr` are set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_ih_attr`/`weight_hh_attr`/`bias_ih_attr`/`bias_hh_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_ih_attr`/`bias_hh_attr` are set to a bool, the behavior matches PyTorch's `bias`.
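As an illustrative sketch (an assumed equivalence, not taken from the original doc), disabling both GRU biases looks like this in the two frameworks:
```python
import paddle
import torch

# PyTorch: one flag turns off both the input-hidden and hidden-hidden biases
torch_gru = torch.nn.GRU(input_size=16, hidden_size=32, bias=False)

# PaddlePaddle: the two biases are controlled separately; bool values mirror PyTorch
paddle_gru = paddle.nn.GRU(input_size=16, hidden_size=32,
                           bias_ih_attr=False, bias_hh_attr=False)
```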
......@@ -11,7 +11,7 @@ torch.nn.LSTM(input_size,
proj_size=0)
```
### [paddle.nn.LSTM](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/rnn/LSTM_cn.html#lstm)
### [paddle.nn.LSTM](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/LSTM_cn.html#lstm)
```python
paddle.nn.LSTM(input_size,
hidden_size,
......
......@@ -5,7 +5,7 @@
torch.nn.Linear(in_features, out_features, bias=True)
```
### [paddle.nn.Linear](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Linear_cn.html#linear)
### [paddle.nn.Linear](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Linear_cn.html#linear)
```python
paddle.nn.Linear(in_features, out_features, weight_attr=None, bias_attr=None, name=None)
......@@ -15,4 +15,4 @@ torch.nn.Linear(in_features, out_features, bias=True)
#### Updatable parameter settings
***PyTorch***: `bias` defaults to True, meaning an updatable bias parameter is used.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/param_attr/ParamAttr_cn.html#cn-api-fluid-paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
***PaddlePaddle***: `weight_attr`/`bias_attr` use the default weight/bias parameter attributes by default; otherwise they take the specified weight/bias parameter attributes, see [ParamAttr](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ParamAttr_cn.html#paramattr) for details. When `bias_attr` is set to a bool, the behavior matches PyTorch's `bias`.
......@@ -10,7 +10,7 @@ torch.nn.MaxPool1d(kernel_size,
ceil_mode=False)
```
### [paddle.nn.MaxPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/MaxPool1D_cn.html#maxpool1d)
### [paddle.nn.MaxPool1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MaxPool1D_cn.html#maxpool1d)
```python
paddle.nn.MaxPool1D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.MaxPool2d(kernel_size,
ceil_mode=False)
```
### [paddle.nn.MaxPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/MaxPool2D_cn.html#maxpool2d)
### [paddle.nn.MaxPool2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MaxPool2D_cn.html#maxpool2d)
```python
paddle.nn.MaxPool2D(kernel_size,
......
......@@ -10,7 +10,7 @@ torch.nn.MaxPool3d(kernel_size,
ceil_mode=False)
```
### [paddle.nn.MaxPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/pooling/MaxPool3D_cn.html#maxpool3d)
### [paddle.nn.MaxPool3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MaxPool3D_cn.html#maxpool3d)
```python
paddle.nn.MaxPool3D(kernel_size,
......
......@@ -4,7 +4,7 @@
torch.nn.ReflectionPad1d(padding)
```
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad1D_cn.html#pad1d)
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad1D_cn.html#pad1d)
```python
paddle.nn.Pad1D(padding, mode='constant', value=0.0, data_format='NCL', name=None)
```
......
......@@ -4,7 +4,7 @@
torch.nn.ReflectionPad2d(padding)
```
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad2D_cn.html#pad2d)
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad2D_cn.html#pad2d)
```python
paddle.nn.Pad2D(padding, mode='constant', value=0.0, data_format='NCHW', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.ReplicationPad1d(padding)
```
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad1D_cn.html#pad1d)
### [paddle.nn.Pad1D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad1D_cn.html#pad1d)
```python
paddle.nn.Pad1D(padding, mode='constant', value=0.0, data_format='NCL', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.ReplicationPad2d(padding)
```
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad2D_cn.html#pad2d)
### [paddle.nn.Pad2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad2D_cn.html#pad2d)
```python
paddle.nn.Pad2D(padding, mode='constant', value=0.0, data_format='NCHW', name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.nn.ReplicationPad3d(padding)
```
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Pad3D_cn.html#pad3d)
### [paddle.nn.Pad3D](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad3D_cn.html#pad3d)
```python
paddle.nn.Pad3D(padding, mode='constant', value=0.0, data_format='NCDHW', name=None)
```
......
......@@ -6,7 +6,7 @@ torch.nn.Upsample(size=None,
mode='nearest',
align_corners=False)
```
### [paddle.nn.Upsample](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/layer/common/Upsample_cn.html#upsample)
### [paddle.nn.Upsample](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Upsample_cn.html#upsample)
```python
paddle.nn.Upsample(size=None,
scale_factor=None,
......
......@@ -12,7 +12,7 @@ torch.arange(start=0,
device=None,
requires_grad=False)
```
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/arange_cn.html#arange)
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/arange_cn.html#arange)
```python
paddle.arange(start=0,
end=None,
......
......@@ -3,7 +3,7 @@
```python
torch.bernoulli(input, *, generator=None, out=None)
```
### [paddle.bernoulli](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/bernoulli_cn.html#bernoulli)
### [paddle.bernoulli](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/bernoulli_cn.html#bernoulli)
```python
paddle.bernoulli(x, name=None)
```
......
......@@ -12,7 +12,7 @@ torch.empty(*size,
pin_memory=False)
```
### [paddle.empty](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/empty_cn.html#empty)
### [paddle.empty](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/empty_cn.html#empty)
```python
paddle.empty(shape,
......
......@@ -12,7 +12,7 @@ torch.eye(n,
requires_grad=False)
```
### [paddle.eye](https://pytorch.org/docs/stable/generated/torch.eye.html?highlight=eye#torch.eye)
### [paddle.eye](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/eye_cn.html#eye)
```python
paddle.eye(num_rows,
num_columns=None,
......
......@@ -5,7 +5,7 @@
torch.from_numpy(ndarray)
```
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/to_tensor_cn.html#to-tensor)
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/to_tensor_cn.html#to-tensor)
```python
paddle.to_tensor(data,
......
......@@ -12,7 +12,7 @@ torch.full(size,
requires_grad=False)
```
### [paddle.full](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/full_cn.html#full)
### [paddle.full](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/full_cn.html#full)
```python
paddle.full(shape,
fill_value,
......
......@@ -12,7 +12,7 @@ torch.full_like(input,
memory_format=torch.preserve_format)
```
### [paddle.full_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/full_like_cn.html#full-like)
### [paddle.full_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/full_like_cn.html#full-like)
```python
paddle.full_like(x, fill_value, dtype=None, name=None)
......
......@@ -5,7 +5,7 @@
torch.gather(input, dim, index, *, sparse_grad=False, out=None)
```
### [paddle.gather](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/manipulation/gather_cn.html#gather)
### [paddle.gather](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/gather_cn.html#gather)
```python
paddle.gather(x, index, axis=None, name=None)
......
......@@ -12,7 +12,7 @@ torch.linspace(start,
requires_grad=False)
```
### [paddle.linspace](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/layers/linspace_cn.html#linspace)
### [paddle.linspace](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/linspace_cn.html#linspace)
```python
paddle.linspace(start,
stop,
......
......@@ -8,7 +8,7 @@ torch.load(f,
**pickle_load_args)
```
### [paddle.load](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/framework/io/load_cn.html#load)
### [paddle.load](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/load_cn.html#load)
```python
paddle.load(path, **configs)
......
......@@ -3,7 +3,7 @@
```python
torch.multinomial(input, num_samples, replacement=False, *, generator=None, out=None)
```
### [paddle.multinomial](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/multinomial_cn.html#multinomial)
### [paddle.multinomial](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/multinomial_cn.html#multinomial)
```python
paddle.multinomial(x, num_samples=1, replacement=False, name=None)
```
......
......@@ -3,7 +3,7 @@
```python
torch.narrow(input, dim, start, length)
```
### [paddle.slice](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/layers/slice_cn.html#slice)
### [paddle.slice](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/slice_cn.html#slice)
```python
paddle.slice(input, axes, starts, ends)
```
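A minimal sketch of the mapping (an assumption based on the two signatures, not taken from the original page): `dim`/`start`/`length` become `axes`/`starts`/`ends`, with each end computed as `start + length`:
```python
import numpy as np
import paddle
import torch

data = np.arange(24).reshape(2, 3, 4).astype("float32")

# PyTorch: a slice of length 2 starting at index 1 along dim 2
torch_out = torch.narrow(torch.from_numpy(data), 2, 1, 2)

# PaddlePaddle: the same slice expressed as the half-open range [1, 1 + 2) on axis 2
paddle_out = paddle.slice(paddle.to_tensor(data), axes=[2], starts=[1], ends=[1 + 2])
```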
......
......@@ -3,7 +3,7 @@
```python
torch.normal(mean, std, *, generator=None, out=None)
```
### [paddle.normal](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/normal_cn.html#normal)
### [paddle.normal](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/normal_cn.html#normal)
```python
paddle.normal(mean=0.0, std=1.0, shape=None, name=None)
```
......
......@@ -11,7 +11,7 @@ torch.ones(*size,
requires_grad=False)
```
### [paddle.ones](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/ones_cn.html#ones)
### [paddle.ones](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ones_cn.html#ones)
```python
paddle.ones(shape,
......
......@@ -11,7 +11,7 @@ torch.ones_like(input,
memory_format=torch.preserve_format)
```
### [paddle.ones_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/ones_like_cn.html#ones-like)
### [paddle.ones_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/ones_like_cn.html#ones-like)
```python
paddle.ones_like(x, dtype=None, name=None)
......
......@@ -10,7 +10,7 @@ torch.rand(*size,
requires_grad=False)
```
### [paddle.rand](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/rand_cn.html#rand)
### [paddle.rand](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/rand_cn.html#rand)
```python
paddle.rand(shape,
......
......@@ -13,7 +13,7 @@ torch.randint(low=0,
requires_grad=False)
```
### [paddle.randint](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/randint_cn.html#randint)
### [paddle.randint](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/randint_cn.html#randint)
```python
paddle.randint(low=0,
high=None,
......
......@@ -10,7 +10,7 @@ torch.randn(*size,
requires_grad=False)
```
### [paddle.randn](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/randn_cn.html#randn)
### [paddle.randn](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/randn_cn.html#randn)
```python
paddle.randn(shape,
......
......@@ -11,7 +11,7 @@ torch.randperm(n,
requires_grad=False,
pin_memory=False)
```
### [paddle.randperm](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/random/randperm_cn.html#randperm)
### [paddle.randperm](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/randperm_cn.html#randperm)
```python
paddle.randperm(n, dtype='int64', name=None)
```
......
......@@ -12,7 +12,7 @@ torch.range(start=0,
device=None,
requires_grad=False)
```
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/arange_cn.html#arange)
### [paddle.arange](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/arange_cn.html#arange)
```python
paddle.arange(start=0,
end=None,
......
......@@ -8,7 +8,7 @@ torch.save(obj,
pickle_protocol=2)
```
### [paddle.save](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/framework/io/save_cn.html#save)
### [paddle.save](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/save_cn.html#save)
```python
paddle.save(obj, path, pickle_protocol=2)
......
......@@ -9,7 +9,7 @@ torch.tensor(data,
pin_memory=False)
```
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/to_tensor_cn.html#to-tensor)
### [paddle.to_tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/to_tensor_cn.html#to-tensor)
```python
paddle.to_tensor(data,
......
......@@ -5,7 +5,7 @@
torch.transpose(input, dim0, dim1)
```
### [paddle.transpose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/layers/transpose_cn.html#transpose)
### [paddle.transpose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/transpose_cn.html#transpose)
```python
paddle.transpose(x, perm, name=None)
......
......@@ -11,7 +11,7 @@ torch.zeros(*size,
requires_grad=False)
```
### [paddle.zeros](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/zeros_cn.html#zeros)
### [paddle.zeros](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/zeros_cn.html#zeros)
```python
paddle.zeros(shape,
......
......@@ -11,7 +11,7 @@ torch.zeros_like(input,
memory_format=torch.preserve_format)
```
### [paddle.zeros_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/tensor/creation/zeros_like_cn.html#zeros-like)
### [paddle.zeros_like](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/zeros_like_cn.html#zeros-like)
```python
paddle.zeros_like(x, dtype=None, name=None)
......
......@@ -2,14 +2,14 @@
This document lists the PyTorch-PaddlePaddle API mappings related to data processing, distributed processing, and similar utilities.
| No. | PyTorch API | PaddlePaddle API | Notes |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | [torch.nn.DataParallel](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html?highlight=dataparallel#torch.nn.DataParallel) | [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dygraph/parallel/DataParallel_cn.html#dataparallel) | [Difference comparison](torch.nn.DataParallel.md) |
| 2 | [torch.nn.parameter.Parameter](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html?highlight=torch%20nn%20parameter#torch.nn.parameter.Parameter) | [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/ce2bdb0afdc2a09a127e8d9aa394c8b00a877364/python/paddle/fluid/layers/tensor.py#L77) | [Difference comparison](torch.nn.parameter.Parameter.md) |
| 1 | [torch.nn.DataParallel](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html?highlight=dataparallel#torch.nn.DataParallel) | [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/DataParallel_cn.html#dataparallel) | [Difference comparison](torch.nn.DataParallel.md) |
| 2 | [torch.nn.parameter.Parameter](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html?highlight=torch%20nn%20parameter#torch.nn.parameter.Parameter) | [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/release/2.1/python/paddle/fluid/layers/tensor.py#L77) | [Difference comparison](torch.nn.parameter.Parameter.md) |
| 3 | [torch.nn.utils.clip_grad_value_](https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_value_.html?highlight=clip_grad_value_#torch.nn.utils.clip_grad_value_) | No direct equivalent | [Composite implementation](torch.nn.utils.clip_grad_value_.md) |
| 4 | [torch.utils.data.DataLoader](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader) | [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/reader/DataLoader_cn.html#dataloader) | [Difference comparison](torch.utils.data.DataLoader.md) |
| 4 | [torch.utils.data.DataLoader](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader) | [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/DataLoader_cn.html#dataloader) | [Difference comparison](torch.utils.data.DataLoader.md) |
| 5 | [torch.utils.data.random_split](https://pytorch.org/docs/stable/data.html?highlight=random_split#torch.utils.data.random_split) | No direct equivalent | [Composite implementation](torch.utils.data.random_split.md) |
| 6 | [torch.utils.data.distributed.DistributedSampler](https://pytorch.org/docs/stable/data.html?highlight=distributedsampler#torch.utils.data.distributed.DistributedSampler) | No direct equivalent | [Composite implementation](torch.utils.data.distributed.DistributedSampler.md) |
| 7 | [torch.utils.data.Dataset](https://pytorch.org/docs/stable/data.html?highlight=torch%20utils%20data%20dataset#torch.utils.data.Dataset) | [paddle.io.Dataset](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/dataset/Dataset_cn.html#dataset) | Consistent functionality |
| 8 | [torch.utils.data.BatchSampler](https://pytorch.org/docs/stable/data.html?highlight=batchsampler#torch.utils.data.BatchSampler) | [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/batch_sampler/BatchSampler_cn.html#batchsampler) | [Difference comparison](torch.utils.data.BatchSampler.md) |
| 9 | [torch.utils.data.Sampler](https://pytorch.org/docs/stable/data.html?highlight=sampler#torch.utils.data.Sampler) | [paddle.io.Sampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/sampler/Sampler_cn.html#sampler) | Consistent functionality |
| 7 | [torch.utils.data.Dataset](https://pytorch.org/docs/stable/data.html?highlight=torch%20utils%20data%20dataset#torch.utils.data.Dataset) | [paddle.io.Dataset](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/Dataset_cn.html#dataset) | Consistent functionality |
| 8 | [torch.utils.data.BatchSampler](https://pytorch.org/docs/stable/data.html?highlight=batchsampler#torch.utils.data.BatchSampler) | [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/BatchSampler_cn.html#batchsampler) | [Difference comparison](torch.utils.data.BatchSampler.md) |
| 9 | [torch.utils.data.Sampler](https://pytorch.org/docs/stable/data.html?highlight=sampler#torch.utils.data.Sampler) | [paddle.io.Sampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/Sampler_cn.html#sampler) | Consistent functionality |
***Continuously updated...***
......@@ -4,7 +4,7 @@
torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)
```
### [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dygraph/parallel/DataParallel_cn.html#dataparallel)
### [paddle.DataParallel](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/DataParallel_cn.html#dataparallel)
```python
paddle.DataParallel(layers, strategy=None, comm_buffer_size=25, last_comm_buffer_size=1)
```
......
......@@ -4,7 +4,7 @@
torch.nn.parameter.Parameter(data, requires_grad=True)
```
## [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/ce2bdb0afdc2a09a127e8d9aa394c8b00a877364/python/paddle/fluid/layers/tensor.py#L77)
## [paddle.create_parameter](https://github.com/PaddlePaddle/Paddle/blob/release/2.1/python/paddle/fluid/layers/tensor.py#L77)
```python
paddle.create_parameter(shape,
dtype,
......
......@@ -4,7 +4,7 @@
torch.utils.data.BatchSampler(sampler, batch_size, drop_last)
```
### [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/dataloader/batch_sampler/BatchSampler_cn.html#batchsampler)
### [paddle.io.BatchSampler](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/BatchSampler_cn.html#batchsampler)
```python
paddle.io.BatchSampler(dataset=None, sampler=None, shuffle=False, batch_size=1, drop_last=False)
```
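A minimal usage sketch (illustrative only; the dataset and sampler below are made up): PyTorch's positional `sampler`/`batch_size`/`drop_last` become keyword arguments in Paddle, which can alternatively construct the sampler from a `dataset`:
```python
import paddle

class RangeDataset(paddle.io.Dataset):
    def __init__(self, n):
        self.n = n
    def __getitem__(self, idx):
        return idx
    def __len__(self):
        return self.n

sampler = paddle.io.SequenceSampler(RangeDataset(10))

# Roughly equivalent to torch.utils.data.BatchSampler(sampler, 4, drop_last=True)
batch_sampler = paddle.io.BatchSampler(sampler=sampler, batch_size=4, drop_last=True)
for batch_indices in batch_sampler:
    print(batch_indices)  # e.g. [0, 1, 2, 3], then [4, 5, 6, 7]
```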
......
......@@ -18,7 +18,7 @@ torch.utils.data.DataLoader(dataset,
persistent_workers=False)
```
### [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fluid/reader/DataLoader_cn.html#dataloader)
### [paddle.io.DataLoader](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/DataLoader_cn.html#dataloader)
```python
paddle.io.DataLoader(dataset,
feed_list=None,
......
......@@ -3,20 +3,20 @@
This document lists the PyTorch-PaddlePaddle API mappings related to vision processing.
| No. | PyTorch API | PaddlePaddle API | Notes |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------- |
| 1 | [torchvision.transforms.Compose](https://pytorch.org/vision/stable/transforms.html?highlight=compose#torchvision.transforms.Compose) | [paddle.vision.transforms.Compose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Compose_cn.html#compose) | Consistent functionality |
| 1 | [torchvision.transforms.Compose](https://pytorch.org/vision/stable/transforms.html?highlight=compose#torchvision.transforms.Compose) | [paddle.vision.transforms.Compose](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Compose_cn.html#compose) | Consistent functionality |
| 2 | [torchvision.transforms.ToPILImage](https://pytorch.org/vision/stable/transforms.html?highlight=topilimage#torchvision.transforms.ToPILImage) | No direct equivalent | [Composite implementation](torchvision.transforms.ToPILImage.md) |
| 3 | [torchvision.transforms.Resize](https://pytorch.org/vision/stable/transforms.html?highlight=resize#torchvision.transforms.Resize) | [paddle.vision.transforms.Resize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Resize_cn.html#resize) | Consistent functionality |
| 4 | [torchvision.transforms.ToTensor](https://pytorch.org/vision/stable/transforms.html?highlight=totensor#torchvision.transforms.ToTensor) | [paddle.vision.transforms.ToTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/ToTensor_cn.html#totensor) | Consistent functionality |
| 5 | [torchvision.transforms.RandomHorizontalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomhorizontalflip#torchvision.transforms.RandomHorizontalFlip) | [paddle.vision.transforms.RandomHorizontalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomHorizontalFlip_cn.html#randomhorizontalflip) | Consistent functionality |
| 6 | [torchvision.transforms.CenterCrop](https://pytorch.org/vision/stable/transforms.html?highlight=centercrop#torchvision.transforms.CenterCrop) | [paddle.vision.transforms.CenterCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/CenterCrop_cn.html#centercrop) | Consistent functionality |
| 7 | [torchvision.transforms.ColorJitter](https://pytorch.org/vision/stable/transforms.html?highlight=colorjitter#torchvision.transforms.ColorJitter) | [paddle.vision.transforms.ColorJitter](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/ColorJitter_cn.html#colorjitter) | Consistent functionality |
| 8 | [torchvision.transforms.Grayscale](https://pytorch.org/vision/stable/transforms.html?highlight=grayscale#torchvision.transforms.Grayscale) | [paddle.vision.transforms.Grayscale](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Grayscale_cn.html#grayscale) | Consistent functionality |
| 9 | [torchvision.transforms.Normalize](https://pytorch.org/vision/stable/transforms.html?highlight=normalize#torchvision.transforms.Normalize) | [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Normalize_cn.html#normalize) | [Difference comparison](torchvision.transforms.Normalize.md) |
| 10 | [torchvision.transforms.RandomResizedCrop](https://pytorch.org/vision/stable/transforms.html?highlight=randomresizedcrop#torchvision.transforms.RandomResizedCrop) | [paddle.vision.transforms.RandomResizedCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomResizedCrop_cn.html#randomresizedcrop) | Consistent functionality |
| 11 | [torchvision.transforms.Pad](https://pytorch.org/vision/stable/transforms.html?highlight=pad#torchvision.transforms.Pad) | [paddle.vision.transforms.Pad](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Pad_cn.html#pad) | Consistent functionality |
| 3 | [torchvision.transforms.Resize](https://pytorch.org/vision/stable/transforms.html?highlight=resize#torchvision.transforms.Resize) | [paddle.vision.transforms.Resize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Resize_cn.html#resize) | Consistent functionality |
| 4 | [torchvision.transforms.ToTensor](https://pytorch.org/vision/stable/transforms.html?highlight=totensor#torchvision.transforms.ToTensor) | [paddle.vision.transforms.ToTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/ToTensor_cn.html#totensor) | Consistent functionality |
| 5 | [torchvision.transforms.RandomHorizontalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomhorizontalflip#torchvision.transforms.RandomHorizontalFlip) | [paddle.vision.transforms.RandomHorizontalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomHorizontalFlip_cn.html#randomhorizontalflip) | Consistent functionality |
| 6 | [torchvision.transforms.CenterCrop](https://pytorch.org/vision/stable/transforms.html?highlight=centercrop#torchvision.transforms.CenterCrop) | [paddle.vision.transforms.CenterCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/CenterCrop_cn.html#centercrop) | Consistent functionality |
| 7 | [torchvision.transforms.ColorJitter](https://pytorch.org/vision/stable/transforms.html?highlight=colorjitter#torchvision.transforms.ColorJitter) | [paddle.vision.transforms.ColorJitter](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/ColorJitter_cn.html#colorjitter) | Consistent functionality |
| 8 | [torchvision.transforms.Grayscale](https://pytorch.org/vision/stable/transforms.html?highlight=grayscale#torchvision.transforms.Grayscale) | [paddle.vision.transforms.Grayscale](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Grayscale_cn.html#grayscale) | Consistent functionality |
| 9 | [torchvision.transforms.Normalize](https://pytorch.org/vision/stable/transforms.html?highlight=normalize#torchvision.transforms.Normalize) | [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Normalize_cn.html#normalize) | [Difference comparison](torchvision.transforms.Normalize.md) |
| 10 | [torchvision.transforms.RandomResizedCrop](https://pytorch.org/vision/stable/transforms.html?highlight=randomresizedcrop#torchvision.transforms.RandomResizedCrop) | [paddle.vision.transforms.RandomResizedCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomResizedCrop_cn.html#randomresizedcrop) | Consistent functionality |
| 11 | [torchvision.transforms.Pad](https://pytorch.org/vision/stable/transforms.html?highlight=pad#torchvision.transforms.Pad) | [paddle.vision.transforms.Pad](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Pad_cn.html#pad) | Consistent functionality |
| 12 | [torchvision.transforms.RandomCrop](https://pytorch.org/vision/stable/transforms.html?highlight=randomcrop#torchvision.transforms.RandomCrop) | [paddle.vision.transforms.RandomCrop](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomCrop_cn.html#randomcrop) | Consistent functionality |
| 13 | [torchvision.transforms.RandomRotation](https://pytorch.org/vision/stable/transforms.html?highlight=randomrotation#torchvision.transforms.RandomRotation) | [paddle.vision.transforms.RandomRotation](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomRotation_cn.html#daimashili) | Consistent functionality |
| 14 | [torchvision.transforms.RandomVerticalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomverticalflip#torchvision.transforms.RandomVerticalFlip) | [paddle.vision.transforms.RandomVerticalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/RandomVerticalFlip_cn.html#randomverticalflip) | Consistent functionality |
| 13 | [torchvision.transforms.RandomRotation](https://pytorch.org/vision/stable/transforms.html?highlight=randomrotation#torchvision.transforms.RandomRotation) | [paddle.vision.transforms.RandomRotation](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomCrop_cn.html#randomcrop) | Consistent functionality |
| 14 | [torchvision.transforms.RandomVerticalFlip](https://pytorch.org/vision/stable/transforms.html?highlight=randomverticalflip#torchvision.transforms.RandomVerticalFlip) | [paddle.vision.transforms.RandomVerticalFlip](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/RandomVerticalFlip_cn.html#randomverticalflip) | Consistent functionality |
| 15 | [torchvision.transforms.Lambda](https://pytorch.org/vision/stable/transforms.html?highlight=lambda#torchvision.transforms.Lambda) | No direct equivalent | [Composite implementation](torchvision.transforms.Lambda.md) |
| 17 | [torchvision.utils.save_image](https://pytorch.org/vision/stable/utils.html?highlight=save_image#torchvision.utils.save_image) | No direct equivalent | [Composite implementation](torchvision.utils.save_image.md) |
| 18 | [torchvision.models family of models](https://pytorch.org/vision/stable/models.html?highlight=torchvision%20models) | Provided by X2Paddle | [Usage](torchvision.models.md) |
......
......@@ -4,7 +4,7 @@
torchvision.transforms.Normalize(mean, std, inplace=False)
```
### [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/transforms/Normalize_cn.html#normalize)
### [paddle.vision.transforms.Normalize](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/vision/transforms/Normalize_cn.html#normalize)
```python
paddle.vision.transforms.Normalize(mean=0.0, std=1.0, data_format='CHW', to_rgb=False, keys=None)
```
......
......@@ -70,12 +70,6 @@ def arg_parser():
action="store_true",
default=False,
help="define input shape for tf model")
parser.add_argument(
"--paddle_type",
"-pt",
type=_text_type,
default="dygraph",
help="define the paddle model type after converting(dygraph/static)")
parser.add_argument(
"--convert_torch_project",
"-tp",
......@@ -97,10 +91,7 @@ def arg_parser():
return parser
def tf2paddle(model_path,
save_dir,
define_input_shape=False,
paddle_type="dygraph"):
def tf2paddle(model_path, save_dir, define_input_shape=False):
# check tensorflow installation and version
try:
import os
......@@ -119,32 +110,21 @@ def tf2paddle(model_path,
return
from x2paddle.decoder.tf_decoder import TFDecoder
if paddle_type == "dygraph":
from x2paddle.op_mapper.dygraph.tf2paddle.tf_op_mapper import TFOpMapper
else:
from x2paddle.op_mapper.static.tf2paddle.tf_op_mapper import TFOpMapper
from x2paddle.op_mapper.tf2paddle.tf_op_mapper import TFOpMapper
print("Now translating model from tensorflow to paddle.")
model = TFDecoder(model_path, define_input_shape=define_input_shape)
mapper = TFOpMapper(model)
mapper.paddle_graph.build()
if paddle_type == "dygraph":
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="tf", paddle_type=paddle_type)
graph_opt.optimize(mapper.paddle_graph)
else:
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="tf", paddle_type=paddle_type)
graph_opt.optimize(mapper.paddle_graph)
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="tf")
graph_opt.optimize(mapper.paddle_graph)
mapper.paddle_graph.gen_model(save_dir)
def caffe2paddle(proto, weight, save_dir, caffe_proto, paddle_type):
def caffe2paddle(proto, weight, save_dir, caffe_proto):
from x2paddle.decoder.caffe_decoder import CaffeDecoder
if paddle_type == "dygraph":
from x2paddle.op_mapper.dygraph.caffe2paddle.caffe_op_mapper import CaffeOpMapper
else:
from x2paddle.op_mapper.static.caffe2paddle.caffe_op_mapper import CaffeOpMapper
from x2paddle.op_mapper.caffe2paddle.caffe_op_mapper import CaffeOpMapper
import google.protobuf as gpb
ver_part = gpb.__version__.split('.')
version_satisfy = False
......@@ -158,13 +138,13 @@ def caffe2paddle(proto, weight, save_dir, caffe_proto, paddle_type):
mapper.paddle_graph.build()
print("Model optimizing ...")
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(source_frame="caffe", paddle_type=paddle_type)
graph_opt = GraphOptimizer(source_frame="caffe")
graph_opt.optimize(mapper.paddle_graph)
print("Model optimized.")
mapper.paddle_graph.gen_model(save_dir)
def onnx2paddle(model_path, save_dir, paddle_type):
def onnx2paddle(model_path, save_dir):
# check onnx installation and version
try:
import onnx
......@@ -178,10 +158,7 @@ def onnx2paddle(model_path, save_dir, paddle_type):
print("Now translating model from onnx to paddle.")
from x2paddle.decoder.onnx_decoder import ONNXDecoder
if paddle_type == "dygraph":
from x2paddle.op_mapper.dygraph.onnx2paddle.onnx_op_mapper import ONNXOpMapper
else:
from x2paddle.op_mapper.static.onnx2paddle.onnx_op_mapper import ONNXOpMapper
from x2paddle.op_mapper.onnx2paddle.onnx_op_mapper import ONNXOpMapper
model = ONNXDecoder(model_path)
mapper = ONNXOpMapper(model)
mapper.paddle_graph.build()
......@@ -206,7 +183,7 @@ def pytorch2paddle(module, save_dir, jit_type="trace", input_examples=None):
print("Now translating model from pytorch to paddle.")
from x2paddle.decoder.pytorch_decoder import ScriptDecoder, TraceDecoder
from x2paddle.op_mapper.dygraph.pytorch2paddle.pytorch_op_mapper import PyTorchOpMapper
from x2paddle.op_mapper.pytorch2paddle.pytorch_op_mapper import PyTorchOpMapper
if jit_type == "trace":
model = TraceDecoder(module, input_examples)
......@@ -216,8 +193,7 @@ def pytorch2paddle(module, save_dir, jit_type="trace", input_examples=None):
mapper.paddle_graph.build()
print("Model optimizing ...")
from x2paddle.optimizer.optimizer import GraphOptimizer
graph_opt = GraphOptimizer(
source_frame="pytorch", paddle_type="dygraph", jit_type=jit_type)
graph_opt = GraphOptimizer(source_frame="pytorch", jit_type=jit_type)
graph_opt.optimize(mapper.paddle_graph)
print("Model optimized.")
mapper.paddle_graph.gen_model(save_dir, jit_type=jit_type)
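# A minimal usage sketch of the dygraph-only pytorch2paddle entry point shown
# above; the torchvision model, input shape and save_dir are illustrative
# placeholders rather than part of this commit.
import torch
import torchvision
from x2paddle.convert import pytorch2paddle

example_model = torchvision.models.resnet18(pretrained=True)
example_model.eval()
example_input = torch.randn(1, 3, 224, 224)

# trace-based conversion requires concrete example inputs
pytorch2paddle(example_model, save_dir="pd_model",
               jit_type="trace", input_examples=[example_input])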
......@@ -242,8 +218,6 @@ def main():
if not args.convert_torch_project:
assert args.framework is not None, "--framework is not defined(support tensorflow/caffe/onnx)"
assert args.save_dir is not None, "--save_dir is not defined"
assert args.paddle_type in ["dygraph", "static"
], "--paddle_type must be 'dygraph' or 'static'"
try:
import platform
......@@ -274,16 +248,15 @@ def main():
define_input_shape = False
if args.define_input_shape:
define_input_shape = True
tf2paddle(args.model, args.save_dir, define_input_shape,
args.paddle_type)
tf2paddle(args.model, args.save_dir, define_input_shape)
elif args.framework == "caffe":
assert args.prototxt is not None and args.weight is not None, "--prototxt and --weight should be defined while translating caffe model"
caffe2paddle(args.prototxt, args.weight, args.save_dir,
args.caffe_proto, args.paddle_type)
args.caffe_proto)
elif args.framework == "onnx":
assert args.model is not None, "--model should be defined while translating onnx model"
onnx2paddle(args.model, args.save_dir, args.paddle_type)
onnx2paddle(args.model, args.save_dir)
elif args.framework == "paddle2onnx":
print(
"Paddle to ONNX tool has been migrated to the new github: https://github.com/PaddlePaddle/paddle2onnx"
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from x2paddle.core.graph import GraphNode
from x2paddle.core.util import *
import collections
import six
class Layer(object):
def __init__(self):
self.op = None
self.param_attr = dict()
self.inputs = dict()
self.output = None
self.is_custom_layer = False
self.use_fluid = False
def get_code(self):
layer_code = ""
if self.output is not None:
if isinstance(self.output, six.string_types):
layer_code = self.output + " = "
else:
layer_code = self.output.layer_name + " = "
if self.is_custom_layer:
layer_code = layer_code + self.op + "("
elif self.op == "=":
layer_code = layer_code
elif self.use_fluid:
layer_code = layer_code + "fluid." + self.op + "("
elif self.op == "full_like":
layer_code = layer_code + "paddle." + self.op + "("
else:
layer_code = layer_code + "fluid.layers." + self.op + "("
if isinstance(self.inputs, list):
in_list = "["
for input in self.inputs:
if isinstance(input, GraphNode):
if hasattr(input, "index"):
in_list += (
input.layer_name + "[{}]".format(input.index) + ", "
)
else:
in_list += (input.layer_name + ", ")
elif isinstance(input, six.string_types):
in_list += (input + ", ")
else:
raise Exception(
"Element of inputs should be GraphNode or String")
in_list = in_list.strip(", ") + "], "
layer_code += in_list
elif isinstance(self.inputs, dict):
inputs = collections.OrderedDict(self.inputs)
for key, input in inputs.items():
if isinstance(input, GraphNode):
if hasattr(input, "index"):
layer_code = layer_code + key + "={}, ".format(
input.layer_name + "[{}]".format(input.index))
else:
layer_code = layer_code + key + "={}, ".format(
input.layer_name)
else:
layer_code = layer_code + key + "={}, ".format(input)
elif isinstance(self.inputs, GraphNode):
if hasattr(self.inputs, "index"):
layer_code += (
self.inputs.layer_name + "[{}]".format(self.inputs.index))
else:
layer_code += (self.inputs.layer_name)
if self.op != "=":
layer_code += ", "
elif isinstance(self.inputs, six.string_types):
layer_code += (self.inputs)
if self.op != "=":
layer_code += ", "
else:
raise Exception("Unknown type of inputs.")
param_attr = collections.OrderedDict(self.param_attr)
for key, value in param_attr.items():
if '\n' in str(value):
value = string(str(value).replace('\n', ','))
if str(key) == 'attr':
value = 'ParamAttr(' + str(value) + ')'
layer_code = layer_code + key + "={}, ".format(value)
layer_code = layer_code.strip(", ")
if self.op != "=":
layer_code += ")"
return layer_code
class FluidCode(object):
def __init__(self):
self.layers = list()
def add_layer(self,
op,
inputs,
output,
param_attr=None,
use_fluid=False,
is_custom_layer=False):
layer = Layer()
layer.op = op
layer.use_fluid = use_fluid
layer.is_custom_layer = is_custom_layer
if inputs is not None:
layer.inputs = inputs
layer.output = output
if param_attr is not None:
layer.param_attr = param_attr
self.layers.append(layer)
def add_note(self, note):
        # note should be a string
self.layers.append(note)
def clear(self):
self.layers = list()
def gen_codes(self):
codes = list()
for layer in self.layers:
if isinstance(layer, Layer):
codes.append(layer.get_code())
elif isinstance(layer, six.string_types):
codes.append(layer)
return codes
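The legacy `Layer`/`FluidCode` pair only assembles code strings; nothing is executed. A minimal sketch of the string it produces, assuming a checkout where this module is still importable as `x2paddle.core.fluid_code` (layer names are illustrative):

```python
# Hedged sketch of the string assembly done by Layer.get_code / FluidCode.
from x2paddle.core.fluid_code import FluidCode

fluid_code = FluidCode()
fluid_code.add_layer(
    "relu",                        # emitted as fluid.layers.relu(...)
    inputs="conv1",                # a plain string is inserted verbatim
    output="relu1",                # left-hand side of the generated assignment
    param_attr={"name": "'relu1'"})
# -> ["relu1 = fluid.layers.relu(conv1, name='relu1')"]
print(fluid_code.gen_codes())
```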
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
from paddle.fluid.proto import framework_pb2
from x2paddle.core.util import *
import inspect
import os
def export_paddle_param(param, param_name, dir):
dtype_map = {
"int16": [framework_pb2.VarType.INT16, 'h'],
"int32": [framework_pb2.VarType.INT32, 'i'],
"int64": [framework_pb2.VarType.INT64, 'q'],
"float16": [framework_pb2.VarType.FP16, 'e'],
"float32": [framework_pb2.VarType.FP32, 'f'],
"float64": [framework_pb2.VarType.FP64, 'd'],
"bool": [framework_pb2.VarType.BOOL, None]
}
shape = param.shape
if str(param.dtype) in ['uint8', 'uint_8', 'bool']:
param = param.astype('int64')
if len(shape) == 0:
        assert param.size == 1, "Unexpected situation happened!"
shape = [1]
assert str(
param.dtype) in dtype_map, "Unknown dtype {} of params: {}.".format(
str(param.dtype), param_name)
fp = open(os.path.join(dir, param_name), 'wb')
numpy.array([0], dtype='int32').tofile(fp)
numpy.array([0], dtype='int64').tofile(fp)
numpy.array([0], dtype='int32').tofile(fp)
tensor_desc = framework_pb2.VarType.TensorDesc()
tensor_desc.data_type = dtype_map[str(param.dtype)][0]
tensor_desc.dims.extend(shape)
desc_size = tensor_desc.ByteSize()
numpy.array([desc_size], dtype='int32').tofile(fp)
fp.write(tensor_desc.SerializeToString())
param.tofile(fp)
fp.close()
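For reference, the file written above is two int32 version fields, an int64 LoD field, the serialized `TensorDesc` preceded by its byte size, and finally the raw weights. A hedged read-back sketch (the file name and the float32 dtype are assumptions; in general the dtype should be taken from `desc.data_type`):

```python
# Hedged sketch: reading a parameter file produced by export_paddle_param.
import numpy
from paddle.fluid.proto import framework_pb2

with open("conv1_weights", "rb") as fp:          # hypothetical file name
    numpy.fromfile(fp, dtype="int32", count=1)   # version
    numpy.fromfile(fp, dtype="int64", count=1)   # LoD info
    numpy.fromfile(fp, dtype="int32", count=1)   # version
    desc_size = int(numpy.fromfile(fp, dtype="int32", count=1)[0])
    desc = framework_pb2.VarType.TensorDesc()
    desc.ParseFromString(fp.read(desc_size))
    data = numpy.fromfile(fp, dtype="float32").reshape(list(desc.dims))
```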
# This function will be copied into the generated code file
def run_net(param_dir="./"):
import os
inputs, outputs = x2paddle_net()
ops = fluid.default_main_program().global_block().ops
used_vars = list()
for op in ops:
used_vars += op.input_arg_names
tmp = list()
for input in inputs:
if isinstance(input, list):
for ipt in input:
if ipt.name not in used_vars:
continue
tmp.append(ipt)
else:
if input.name not in used_vars:
continue
tmp.append(input)
inputs = tmp
for i, out in enumerate(outputs):
if isinstance(out, list):
for out_part in out:
outputs.append(out_part)
del outputs[i]
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
def if_exist(var):
b = os.path.exists(os.path.join(param_dir, var.name))
return b
fluid.io.load_vars(
exe, param_dir, fluid.default_main_program(), predicate=if_exist)
class OpMapper(object):
def __init__(self):
self.paddle_codes = ""
self.tab = " "
self.net_code = list()
self.weights = dict()
self.inputs = list()
self.outputs = list()
def op_checker(self):
unsupported_ops = set()
for node_name in self.graph.topo_sort:
node = self.graph.get_node(node_name)
op = node.layer_type
if not hasattr(self, op):
unsupported_ops.add(op)
if len(unsupported_ops) == 0:
return True
else:
print("There are {} ops not supported yet, list as below".format(
len(unsupported_ops)))
for op in unsupported_ops:
print(op)
return False
def add_codes(self, codes, indent=0):
if isinstance(codes, list):
for code in codes:
self.paddle_codes += (
self.tab * indent + code.strip('\n') + '\n')
elif isinstance(codes, str):
self.paddle_codes += (self.tab * indent + codes.strip('\n') + '\n')
else:
raise Exception("Unknown type of codes")
def add_heads(self):
self.add_codes("from paddle.fluid.initializer import Constant")
self.add_codes("from paddle.fluid.param_attr import ParamAttr")
self.add_codes("import paddle.fluid as fluid")
self.add_codes("import paddle")
self.add_codes("")
def save_inference_model(self, save_dir, params_merge):
self.save_python_model(save_dir)
import sys
import paddle.fluid as fluid
py_code_dir = os.path.join(save_dir, "model_with_code")
sys.path.append(py_code_dir)
import model
try:
inputs, outputs = model.x2paddle_net()
ops = fluid.default_main_program().global_block().ops
used_vars = list()
for op in ops:
used_vars += op.input_arg_names
for i, out in enumerate(outputs):
if isinstance(out, list):
for out_part in out:
outputs.append(out_part)
del outputs[i]
input_names = list()
for input in inputs:
if isinstance(input, list):
for ipt in input:
if ipt.name not in used_vars:
continue
input_names.append(ipt.name)
else:
if input.name not in used_vars:
continue
input_names.append(input.name)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
def if_exist(var):
b = os.path.exists(
os.path.join(os.path.join(py_code_dir, var.name)))
return b
fluid.io.load_vars(
exe,
py_code_dir,
fluid.default_main_program(),
predicate=if_exist)
if params_merge:
fluid.io.save_inference_model(
dirname=os.path.join(save_dir, "inference_model"),
feeded_var_names=input_names,
target_vars=outputs,
executor=exe,
params_filename="__params__")
else:
fluid.io.save_inference_model(
dirname=os.path.join(save_dir, "inference_model"),
feeded_var_names=input_names,
target_vars=outputs,
executor=exe,
params_filename=None)
except:
raise Exception(
"Paddle code was saved in {}/model.py, but seems there's wrong exist, please check model.py manually."
.format(py_code_dir))
def save_python_model(self, save_dir):
if not os.path.exists(save_dir):
os.makedirs(save_dir)
py_code_dir = os.path.join(save_dir, "model_with_code")
if not os.path.exists(py_code_dir):
os.makedirs(py_code_dir)
for name, param in self.weights.items():
export_paddle_param(param, name, py_code_dir)
self.add_heads()
if hasattr(self, "used_custom_layers"):
for _, layer_code in self.used_custom_layers.items():
self.add_codes(layer_code, 0)
self.add_codes("", 0)
self.add_codes("\ndef x2paddle_net():", 0)
self.add_codes("paddle.enable_static()", 1)
for i in range(len(self.graph.topo_sort)):
node_name = self.graph.topo_sort[i]
node = self.graph.get_node(node_name)
if node is None:
continue
if len(node.fluid_code.layers) == 0:
continue
self.add_codes(node.fluid_code.gen_codes(), 1)
self.add_codes("", 0)
input_str = "["
for name in self.graph.input_nodes:
input_str += (name + ", ")
input_str = input_str.strip(", ") + "]"
output_str = "["
for name in self.graph.output_nodes:
output_str += (name + ", ")
output_str = output_str.strip(", ") + "]"
return_code = "return {}, {}".format(input_str, output_str)
self.add_codes(return_code, 1)
self.add_codes("", 0)
self.add_codes(inspect.getsourcelines(run_net)[0])
fp = open(os.path.join(py_code_dir, "model.py"), 'w')
fp.write(self.paddle_codes)
fp.close()
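Once `save_python_model` has written `model_with_code/model.py` and the weight files next to it, the generated `run_net` (copied in via `inspect.getsourcelines`) rebuilds the static program and loads those weights. A hedged sketch of this legacy flow, with an illustrative save directory:

```python
# Hedged sketch: driving the generated model.py (save_dir is illustrative).
import sys

py_code_dir = "pd_model/model_with_code"
sys.path.append(py_code_dir)
import model  # the file written by save_python_model

model.run_net(param_dir=py_code_dir)  # rebuilds the program and loads weights
```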
This diff has been collapsed.
......@@ -17,7 +17,6 @@ import sys
from google.protobuf import text_format
import numpy as np
from x2paddle.core.graph import GraphNode, Graph
from x2paddle.core.fluid_code import FluidCode
from x2paddle.decoder import caffe_shape_inference
......@@ -55,18 +54,17 @@ class CaffeGraphNode(GraphNode):
super(CaffeGraphNode, self).__init__(
layer, layer_name.replace('/', '_').replace('-', '_').lower())
self.layer_type = type_str
self.fluid_code = FluidCode()
self.data = None
def set_params(self, params):
self.data = params
@property
def name(self):
if hasattr(self, 'index'):
return "{}_p{}".format(self.layer_name, self.index)
return self.layer_name
@property
def out_shapes(self):
return self._out_shapes
......@@ -74,7 +72,7 @@ class CaffeGraphNode(GraphNode):
@out_shapes.setter
def out_shapes(self, value):
self._out_shapes = value
@property
def in_shapes(self):
return self._in_shapes
......@@ -258,13 +256,14 @@ class CaffeGraph(Graph):
assert input_node_name in self.node_map, 'The {} isn\'t a valid node'.format(
name)
input_node = self.node_map[input_node_name]
if len(input_node.layer.top
) > 1 and input_node.layer_type not in ["Input", "MemoryData"]:
need_idx = list(input_node.layer.top).index(node.layer.bottom[idx])
name = input_node_name + ':' + str(need_idx)
else:
name = input_node_name
return self.get_node(name, copy=copy)
def set_node_shape(self, node):
inputs = node.inputs
input_shape = []
......@@ -289,7 +288,7 @@ class CaffeDecoder(object):
with open(proto_path, 'rb') as proto_file:
proto_str = proto_file.read()
text_format.Merge(proto_str, self.net)
self.load_using_pb()
self.caffe_graph = CaffeGraph(self.net, self.params,
......
......@@ -13,7 +13,6 @@
# limitations under the License.
from x2paddle.core.graph import GraphNode, Graph
from x2paddle.core.fluid_code import FluidCode
from x2paddle.decoder.onnx_shape_inference import SymbolicShapeInference
from onnx.checker import ValidationError
from onnx.checker import check_model
......@@ -44,7 +43,6 @@ class ONNXGraphNode(GraphNode):
else:
super(ONNXGraphNode, self).__init__(layer, layer_name)
self.layer_type = layer.op_type
self.fluid_code = FluidCode()
self.attr_map = self.get_attr_map()
self.out_shapes = list()
self.dtype = None
......@@ -65,7 +63,7 @@ class ONNXGraphNode(GraphNode):
if 'value' not in self.attr_map:
return None
return self.attr_map['value']
@property
def name(self):
if hasattr(self, 'index'):
......@@ -97,8 +95,10 @@ class ONNXGraphNode(GraphNode):
return self.attr_map[name]
def output(self, index=0):
if index > 0 and len(self.layer.output) <= index:
raise IndexError(
'Output numbers of Node:{} is {} <= index:{}'.format(
self.layer_name, len(self.layer.output), index))
return self.layer.output[index]
......@@ -113,7 +113,6 @@ class ONNXGraphDataNode(GraphNode):
else:
self.layer_type = 'create_parameter'
self.layer_name = layer_name
self.fluid_code = FluidCode()
self.weight = None
self.embeded_as = None
self.which_child = {}
......@@ -147,7 +146,7 @@ class ONNXGraphDataNode(GraphNode):
out_shapes = list()
out_shapes.append(values)
return out_shapes
@property
def name(self):
return self.layer_name
......@@ -320,7 +319,8 @@ class ONNXGraph(Graph):
if first_i == n_i:
continue
if n_ipt == nd.name:
new_nd_name = "{}/{}".format(nd.name, n_i)
new_nd_name = "{}/{}".format(nd.name,
n_i)
if new_nd_name not in node.which_child:
node.which_child[new_nd_name] = idx
break
......@@ -350,9 +350,8 @@ class ONNXGraph(Graph):
else:
if ipt_node.layer_name in node.which_child:
ipt_node.index = node.which_child[ipt_node.layer_name]
return ipt_node
def graph_weights(self):
"""
......
......@@ -13,7 +13,6 @@
# limitations under the License.
from x2paddle.core.graph import GraphNode, Graph
from x2paddle.core.fluid_code import FluidCode
from tensorflow.python.framework import tensor_util
from tensorflow.core.framework import attr_value_pb2
import tensorflow as tf
......@@ -36,7 +35,6 @@ class TFGraphNode(GraphNode):
self.layer_type = layer.op
self.tf_data_format = data_format
self.pd_data_format = "NCHW"
self.fluid_code = FluidCode()
self.dtype_map = {
1: "float32",
......@@ -174,7 +172,6 @@ class TFGraph(Graph):
self._optimize_dialiation_conv()
self._remove_identity_node()
self._remove_cast_node()
def get_node(self, node_name, copy=False):
items = node_name.strip().split(':')
......@@ -191,7 +188,7 @@ class TFGraph(Graph):
if len(items) == 1 and node.layer_type in self.multi_out_ops:
node.index = 0
return node
def get_input_node(self, node, idx=0, copy=False):
input_node_name = node.layer.input[idx]
if idx > 0:
......@@ -289,7 +286,6 @@ class TFGraph(Graph):
self.output_nodes.pop(idx)
else:
self.output_nodes[idx] = input_node.layer_name
def _remove_cast_node(self):
cast_node = list()
......@@ -442,7 +438,7 @@ class TFDecoder(object):
if shape.count(None) > 0:
shape[shape.index(None)] = -1
self.inputs_info["x2paddle_{}".format(layer.name)] = (shape,
dtype)
dtype)
else:
value = graph_node.layer.attr["shape"].shape
shape = [dim.size for dim in value.dim]
......@@ -466,13 +462,15 @@ class TFDecoder(object):
for b in batch_size:
for input_name, info in self.inputs_info.items():
(shape, dtype) = cp.deepcopy(info)
input_tensor = self.sess.graph.get_tensor_by_name(input_name + ":0")
input_tensor = self.sess.graph.get_tensor_by_name(input_name +
":0")
if shape.count(-1) > 0:
shape[shape.index(-1)] = b
feed[input_tensor] = numpy.random.random_sample(shape)
output_tensor = self.sess.graph.get_tensor_by_name(tensor_name)
if use_diff_inputs:
results.append(
self.sess.run([output_tensor], feed)[0].flatten())
else:
return self.sess.run([output_tensor], feed)[0]
......@@ -499,4 +497,3 @@ class TFDecoder(object):
return results[0].tolist()
else:
raise Exception("Couldn't infer a stable shape shape tensor value")
......@@ -12,9 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from .detectionoutput import DetectionOutput
from .normalize import Normalize
from .priorbox import PriorBox
from .roipooling import ROIPooling
from .select import Select
......@@ -15,16 +15,19 @@
import paddle
import paddle.fluid as fluid
class DetectionOutput(object):
def __init__(self, nms_threshold, nms_top_k, keep_top_k, nms_eta,
score_threshold, background_label):
self.detection_output_layer_attrs = {
"background_label": background_label,
"nms_threshold": nms_threshold,
"nms_top_k": nms_top_k,
"keep_top_k": keep_top_k,
"score_threshold": score_threshold,
"nms_eta": nms_eta}
"nms_eta": nms_eta
}
def __call__(self, x0, x1, x2):
priorbox_list = paddle.split(x2, num_or_sections=2, axis=1)
pb = priorbox_list[0]
......@@ -34,11 +37,10 @@ class DetectionOutput(object):
pb_dim = fluid.layers.shape(pb)[0]
loc = paddle.reshape(x0, shape=[-1, pb_dim, 4])
conf_flatten = paddle.reshape(x1, shape=[0, pb_dim, -1])
        out = fluid.layers.detection_output(
            loc=loc,
            scores=conf_flatten,
            prior_box=pb,
            prior_box_var=pbv,
            **self.detection_output_layer_attrs)
        return out
......@@ -15,10 +15,11 @@
import paddle
import paddle.fluid as fluid
class Normalize(object):
def __init__(self, axis):
self.axis = axis
def __call__(self, x, param):
l2_norm = fluid.layers.l2_normalize(x=x, axis=1)
param = paddle.reshape(param, [param.shape[-1]])
......@@ -32,5 +33,3 @@ class Normalize(object):
perm.insert(self.axis, dim)
out = paddle.transpose(out, perm=perm)
return out
......@@ -15,11 +15,10 @@
import paddle
import paddle.fluid as fluid
class PriorBox(object):
def __init__(self, min_sizes, max_sizes, aspect_ratios, variance, flip,
clip, steps, offset, min_max_aspect_ratios_order):
self.priorbox_layer_attrs = {
"min_sizes": min_sizes,
"max_sizes": max_sizes,
......@@ -29,13 +28,13 @@ class PriorBox(object):
"clip": clip,
"steps": steps,
"offset": offset,
"min_max_aspect_ratios_order": min_max_aspect_ratios_order}
"min_max_aspect_ratios_order": min_max_aspect_ratios_order
}
def __call__(self, x0, x1):
        box, var = fluid.layers.prior_box(
            input=x0, image=x1, **self.priorbox_layer_attrs)
        box = paddle.reshape(x=box, shape=[1, 1, -1])
        var = paddle.reshape(x=var, shape=[1, 1, -1])
        out = paddle.concat(x=[box, var], axis=1)
        return out
......@@ -15,19 +15,17 @@
import paddle
import paddle.fluid as fluid
class ROIPooling(object):
def __init__(self, pooled_height, pooled_width, spatial_scale):
self.roipooling_layer_attrs = {
"pooled_height": pooled_height,
"pooled_width": pooled_width,
"spatial_scale": spatial_scale}
"spatial_scale": spatial_scale
}
def __call__(self, x0, x1):
        slice_x1 = paddle.slice(input=x1, axes=[1], starts=[1], ends=[5])
        out = fluid.layers.roi_pool(
            input=x0, rois=slice_x1, **self.roipooling_layer_attrs)
        return out
......@@ -15,21 +15,18 @@
import paddle
import paddle.fluid as fluid
class Select(object):
def __init__(self, input_shape, point, axis):
self.point = point
self.input_shape = input_shape
self.axis = axis
def __call__(self, x):
start = self.point[0]
if len(self.point) == 2:
end = self.point[1]
else:
end = self.input_shape[self.axis]
        out = paddle.slice(x=x, start=start, end=end, axes=[self.axis])
        return out
......@@ -14,17 +14,14 @@
import paddle
class LocalResponseNorm(object):
def __init__(self, size, alpha=1e-4, beta=0.75, k=1.):
self.size = size
self.alpha = alpha
self.beta = beta
self.k = k
def __call__(self, x):
sizes = x.shape
dim = len(sizes)
......@@ -62,4 +59,4 @@ class LocalResponseNorm(object):
div = paddle.scale(div, scale=self.alpha, bias=self.k)
div = paddle.pow(div, self.beta)
res = paddle.divide(x, div)
        return res
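A hedged usage sketch of the custom layer above in dygraph mode, assuming the `LocalResponseNorm` class defined here is in scope; input shape and hyper-parameters are illustrative:

```python
# Hedged sketch: applying the custom LocalResponseNorm to a 4-D NCHW tensor.
import paddle

lrn = LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=1.)
x = paddle.rand([1, 32, 28, 28])
y = lrn(x)  # local response normalization, same shape as x
```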
......@@ -14,10 +14,11 @@
import paddle
class OneHot(object):
def __init__(self, axis):
self.axis = axis
def __call__(self, indices, depth, values):
indices_shape = indices.shape
rank = len(indices.shape)
......@@ -25,14 +26,16 @@ class OneHot(object):
if self.axis < 0:
real_axis = self.axis + rank + 1
depth_range = paddle.arange(end=depth)
        ls = tuple(indices_shape[0:real_axis])
        rs = tuple(indices_shape[real_axis:rank])
        targets = paddle.reshape(depth_range, (1, ) *
                                 (real_axis - 0) + tuple(depth_range.shape) +
                                 (1, ) * (rank - real_axis))
        mod = paddle.mod(indices, depth)
        v = paddle.reshape(mod, ls + (1, ) + rs)
out = targets == v
out = paddle.cast(out, "float32")
on_value = paddle.slice(values, axes=[0], starts=[1], ends=[2])
off_value = paddle.slice(values, axes=[0], starts=[0], ends=[1])
out = out * (on_value - off_value) + off_value
        return out
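A hedged usage sketch of the ONNX-style `OneHot` helper above, assuming the class is in scope; tensor values are illustrative, and `values` holds the off/on pair as in the ONNX operator:

```python
# Hedged sketch: one-hot encoding along the last axis with the helper above.
import paddle

onehot = OneHot(axis=-1)
indices = paddle.to_tensor([[1, 3], [0, 2]], dtype="int64")
depth = paddle.to_tensor(4, dtype="int64")
values = paddle.to_tensor([0.0, 1.0])      # [off_value, on_value]
out = onehot(indices, depth, values)       # shape [2, 2, 4], float32
```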
......@@ -15,6 +15,7 @@
import paddle
from x2paddle.core.util import *
class PadAllDim2(object):
def __init__(self, value, mode):
self.layer_attrs = {}
......@@ -22,7 +23,6 @@ class PadAllDim2(object):
self.layer_attrs['data_format'] = 'NCHW'
self.layer_attrs['value'] = value
def __call__(self, x, pad):
pad = paddle.reshape(pad, shape=[2, -1])
pad = paddle.transpose(pad, perm=[1, 0])
......@@ -32,4 +32,4 @@ class PadAllDim2(object):
x = paddle.unsqueeze(x, axis=[0, 1])
out = paddle.nn.functional.pad(x=x, pad=pad, **self.layer_attrs)
out = paddle.squeeze(out, axis=[0, 1])
        return out
......@@ -15,6 +15,7 @@
import paddle
from x2paddle.core.util import *
class PadAllDim4(object):
def __init__(self, value, mode):
self.layer_attrs = {}
......@@ -22,16 +23,15 @@ class PadAllDim4(object):
self.layer_attrs['data_format'] = 'NCHW'
self.layer_attrs['value'] = value
def __call__(self, x, pad):
pad = paddle.reshape(pad, shape=[2, -1])
pad = paddle.transpose(pad, perm=[1, 0])
pad = paddle.reverse(pad, axis=[0])
pad = paddle.flatten(pad)
pad = paddle.cast(pad, dtype="int32")
pad = paddle.cast(pad, dtype="int32")
pad1, pad2 = paddle.split(pad, num_or_sections=2, axis=0)
x = paddle.nn.functional.pad(x=x, pad=pad1, **self.layer_attrs)
x = paddle.transpose(x, perm=[2, 3, 0, 1])
x = paddle.nn.functional.pad(x=x, pad=pad2, **self.layer_attrs)
out = paddle.transpose(x, perm=[2, 3, 0, 1])
        return out
......@@ -15,18 +15,19 @@
import paddle
from x2paddle.core.util import *
class PadAllDim4WithOneInput(object):
def __init__(self, pad, value, mode):
self.layer_attrs = {}
self.layer_attrs['mode'] = mode
self.layer_attrs['data_format'] = 'NCHW'
self.layer_attrs['value'] = value
self.pad1 = pad[0:4]
self.pad2 = pad[4:9]
def __call__(self, x):
x = paddle.nn.functional.pad(x=x, pad=self.pad1, **self.layer_attrs)
x = paddle.transpose(x, perm=[2, 3, 0, 1])
x = paddle.nn.functional.pad(x=x, pad=self.pad2, **self.layer_attrs)
out = paddle.transpose(x, perm=[2, 3, 0, 1])
        return out
......@@ -15,6 +15,7 @@
import paddle
from x2paddle.core.util import *
class PadWithTwoInput(object):
def __init__(self, value, mode, data_format):
self.layer_attrs = {}
......@@ -22,7 +23,6 @@ class PadWithTwoInput(object):
self.layer_attrs['data_format'] = data_format
self.layer_attrs['value'] = value
def __call__(self, x, pad):
pad = paddle.reshape(pad, shape=[2, -1])
pad = paddle.transpose(pad, perm=[1, 0])
......@@ -30,4 +30,4 @@ class PadWithTwoInput(object):
pad = paddle.flatten(pad)
pad = paddle.cast(pad, dtype="int32")
out = paddle.nn.functional.pad(x=x, pad=pad, **self.layer_attrs)
        return out
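A hedged usage sketch of `PadWithTwoInput` above, which receives the ONNX-style `pads` tensor (begin values for every padded axis first, then end values) as a second input; the class is assumed to be in scope and the shapes and pad values are illustrative:

```python
# Hedged sketch: padding the two spatial dims of an NCHW tensor with the
# helper above (pad layout follows the ONNX convention the class expects).
import paddle

pad_layer = PadWithTwoInput(value=0.0, mode="constant", data_format="NCHW")
x = paddle.rand([1, 3, 8, 8])
pad = paddle.to_tensor([1, 1, 2, 2], dtype="int64")  # begins, then ends
out = pad_layer(x, pad)
```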
......@@ -13,24 +13,22 @@
# limitations under the License.
import sys
from x2paddle.op_mapper.onnx2paddle.opset9 import OpSet9
from x2paddle.decoder.onnx_decoder import ONNXGraphNode
from x2paddle.core.program import PaddleGraph
class ONNXOpMapper():
def __init__(self, decoder):
self.support_op_sets = [9, ]
self.default_op_set = 9
self.graph = decoder.graph
self.paddle_graph = PaddleGraph(parent_layer=None, source_type="onnx")
self.paddle_graph.outputs = self.graph.output_nodes
self.opset = self.create_opset(decoder)
if not self.op_checker():
raise Exception("Model is not supported yet.")
print("Total nodes: {}".format(
sum([
isinstance(node, ONNXGraphNode)
......@@ -52,7 +50,6 @@ class ONNXOpMapper(OpMapper):
self.paddle_graph.set_name(self.graph.graph_name)
self.paddle_graph.set_parameters(self.opset.weights)
self.paddle_graph.set_inputs_info(self.opset.inputs_info)
def op_checker(self):
unsupported_ops = set()
......@@ -67,8 +64,8 @@ class ONNXOpMapper(OpMapper):
return True
else:
if len(unsupported_ops) > 0:
print("\n========= {} OPs are not supported yet ===========".format(
len(unsupported_ops)))
print("\n========= {} OPs are not supported yet ===========".
format(len(unsupported_ops)))
for op in unsupported_ops:
print("========== {} ============".format(op))
return False
......
......@@ -411,7 +411,6 @@ class OpSet9():
pooled_width = node.get_attr('output_width')
spatial_scale = node.get_attr('spatial_scale')
sampling_ratio = node.get_attr('sampling_ratio')
#dygraph rois_num is necessary
val_rois_shape = val_rois.name + '_shape'
self.paddle_graph.add_layer(
kernel="paddle.shape",
......
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.