Commit b4b95fb7 authored by channingss

fix markdown style

Parent 71ab0cfc
## FAQ
**Q1. During TensorFlow model conversion, what does the prompt "Unknown shape for input tensor[tensor name: "input"], Please define shape of input here" mean?**
A: This message means that the shape of the input tensor (named "input") could not be read from the TensorFlow pb model, so you need to type the full shape at the prompt, e.g. None,224,224,3, where None stands for the batch dimension.
**Q2. How do I fix a failed TensorFlow model conversion?**
......
## How to convert a custom Caffe Layer
This document describes how to map a custom Caffe Layer to its corresponding implementation in a PaddlePaddle model. Users can add code implementing their own custom layers as needed, so that a model can be converted completely.
***Step 1: Download the source code***
Since this involves modifying the source code, first uninstall x2paddle and then install it from source, which takes the following two steps:
```
pip uninstall x2paddle
pip install git+https://github.com/PaddlePaddle/X2Paddle.git@develop
```
***Step 2: Compile caffe.proto***
This step depends on the protobuf compiler, which can be installed in either of the following two ways:
> Option 1: pip install protobuf
> Option 2: build it from the [official source](https://github.com/protocolbuffers/protobuf)
Use the script ./tools/compile.sh to compile caffe.proto (which contains the required custom Layer definitions) into the target language we need (Python).
Usage:
```
bash ./tools/compile.sh /home/root/caffe/src/caffe/proto
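# The script above just wraps the protobuf compiler. A rough, hypothetical
# equivalent raw call (assuming protoc is on PATH and that compile.sh emits the
# caffe_pb2.py later passed via --caffe_proto) would be:
# protoc --proto_path=/home/root/caffe/src/caffe/proto \
#        --python_out=/home/root/caffe/src/caffe/proto \
#        /home/root/caffe/src/caffe/proto/caffe.proto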
......@@ -25,14 +25,14 @@ bash ./toos/compile.sh /home/root/caffe/src/caffe/proto
- Following the other files in ./x2paddle/op_mapper/caffe_custom_layer, three functions need to be implemented in mylayer.py. The code is analyzed below using roipooling.py as an example (a hedged end-to-end sketch of such a file is given after the conversion command at the end of this section):
1. `def roipooling_shape(input_shape, pooled_w=None, pooled_h=None)`
Parameters:
1. input_shape (list): each element is the shape of one of the layer's inputs; required
2. pooled_w (int): the width of the ROI Pooling kernel; its name matches the key in roi_pooling_param in the .prototxt
3. pooled_h (int): the height of the ROI Pooling kernel; its name matches the key in roi_pooling_param in the .prototxt
Function: computes the output shape after ROI Pooling
Returns: a list in which each element is the shape of one output; since ROI Pooling has a single output, the list has length 1
2. `def roipooling_layer(inputs, input_shape=None, name=None, pooled_w=None, pooled_h=None, spatial_scale=None)`
Parameters:
1. inputs (list): each element is one of the layer's input data; required
......@@ -40,9 +40,9 @@ bash ./toos/compile.sh /home/root/caffe/src/caffe/proto
3. name (str): the name of the ROI Pooling layer; required
4. pooled_w (int): the width of the ROI Pooling kernel; its name matches the key in roi_pooling_param in the .prototxt
5. pooled_h (int): the height of the ROI Pooling kernel; its name matches the key in roi_pooling_param in the .prototxt
6. spatial_scale (float): the scale used to map ROI coordinates from the input scale to the scale used during pooling; its name matches the key in roi_pooling_param in the .prototxt
Function: builds a PaddlePaddle sub-network that implements the behavior of `roipooling_layer`
Returns: a Variable holding the result of the constructed sub-network
3. `def roipooling_weights(name, data=None)`
......@@ -51,7 +51,7 @@ bash ./toos/compile.sh /home/root/caffe/src/caffe/proto
1. name (str): the name of the ROI Pooling layer; required
2. data (list): the roipooling parameters obtained from the Caffe model's .caffemodel file; for roipooling this is None
Function: names each parameter (e.g. kernel, bias); if the format of the layer's parameters in Caffe differs from that in PaddlePaddle, the conversion is also implemented in this function.
Returns: a list containing the name of each parameter.
- Register `roipooling` in roipooling.py, mainly with the following code:
......@@ -70,9 +70,9 @@ bash ./toos/compile.sh /home/root/caffe/src/caffe/proto
# Install x2paddle from the X2Paddle directory
python setup.py install
# Run the conversion
x2paddle --framework=caffe \
         --prototxt=deploy.proto \
         --weight=deploy.caffemodel \
         --save_dir=pd_model \
         --caffe_proto=/home/root/caffe/src/caffe/proto/caffe_pb2.py
```
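For reference, the following is a minimal, hypothetical sketch of the three functions described above, using ROIPooling as the example. Only the function signatures come from this document; the use of `fluid.layers.roi_pool` and `fluid.layers.slice`, and the assumption that Caffe ROIs carry a leading batch-index column, are guesses and should be checked against the actual roipooling.py in ./x2paddle/op_mapper/caffe_custom_layer (the registration call itself is not shown here).
```
import paddle.fluid as fluid

def roipooling_shape(input_shape, pooled_w=None, pooled_h=None):
    # input_shape[0]: feature-map shape [N, C, H, W]; input_shape[1]: ROI shape
    # [num_rois, 5] (assumption about the input ordering).
    fea_shape = input_shape[0]
    rois_shape = input_shape[1]
    output_shape = [rois_shape[0], fea_shape[1], pooled_h, pooled_w]
    # ROI Pooling has a single output, so the returned list has length 1.
    return [output_shape]

def roipooling_layer(inputs, input_shape=None, name=None, pooled_w=None,
                     pooled_h=None, spatial_scale=None):
    feature_map, rois = inputs[0], inputs[1]
    # Assumes Caffe ROIs are [batch_idx, x1, y1, x2, y2]; drop the batch index
    # before calling the Paddle op.
    rois = fluid.layers.slice(rois, axes=[1], starts=[1], ends=[5])
    out = fluid.layers.roi_pool(feature_map,
                                rois,
                                pooled_height=pooled_h,
                                pooled_width=pooled_w,
                                spatial_scale=spatial_scale)
    return out

def roipooling_weights(name, data=None):
    # ROI Pooling has no learnable parameters, so there is nothing to name.
    return []
```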
......@@ -10,5 +10,3 @@
| Normalize | [code](https://github.com/weiliu89/caffe/blob/ssd/src/caffe/layers/normalize_layer.cpp) |
| ROIPooling | [code](https://github.com/rbgirshick/caffe-fast-rcnn/blob/0dcd397b29507b8314e252e850518c5695efbb83/src/caffe/layers/roi_pooling_layer.cpp) |
| Axpy | [code](https://github.com/hujie-frank/SENet/blob/master/src/caffe/layers/axpy_layer.cpp) |
......@@ -28,7 +28,7 @@ def freeze_model(sess, output_tensor_names, freeze_model_path):
f.write(out_graph.SerializeToString())
print("freeze model saved in {}".format(freeze_model_path))
# Load model parameters
sess = tf.Session()
inputs = tf.placeholder(dtype=tf.float32,
......
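For readers who want to run the freezing step end to end, here is a minimal, self-contained sketch in TensorFlow 1.x. Only the `freeze_model` signature and the two visible lines above come from this document; the function body and the reload helper are assumptions based on the standard `tf.graph_util` workflow.
```
import tensorflow as tf

def freeze_model(sess, output_tensor_names, freeze_model_path):
    # Fold variables into constants so graph structure and weights end up in one pb.
    out_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_tensor_names)
    with tf.gfile.GFile(freeze_model_path, "wb") as f:
        f.write(out_graph.SerializeToString())
    print("freeze model saved in {}".format(freeze_model_path))

def load_frozen_model(freeze_model_path):
    # Reload the frozen pb, e.g. to verify it before converting with x2paddle.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(freeze_model_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
    return graph
```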
......@@ -7,7 +7,7 @@ function abort(){
trap 'abort' 0
set -e
TRAVIS_BUILD_DIR=${PWD}
cd $TRAVIS_BUILD_DIR
export PATH=/usr/bin:$PATH
pre-commit install
......
......@@ -135,7 +135,8 @@ class CaffeOpMapper(OpMapper):
if isinstance(params.kernel_size, numbers.Number):
[k_h, k_w] = [params.kernel_size] * 2
elif len(params.kernel_size) > 0:
k_h = params.kernel_h if params.kernel_h > 0 else params.kernel_size[0]
k_w = params.kernel_w if params.kernel_w > 0 else params.kernel_size[
len(params.kernel_size) - 1]
elif params.kernel_h > 0 or params.kernel_w > 0:
......@@ -156,8 +157,8 @@ class CaffeOpMapper(OpMapper):
[p_h, p_w] = [params.pad] * 2
elif len(params.pad) > 0:
p_h = params.pad_h if params.pad_h > 0 else params.pad[0]
p_w = params.pad_w if params.pad_w > 0 else params.pad[len(params.pad) - 1]
elif params.pad_h > 0 or params.pad_w > 0:
p_h = params.pad_h
p_w = params.pad_w
......@@ -225,12 +226,17 @@ class CaffeOpMapper(OpMapper):
node.layer_type, params)
if data is None:
data = []
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(
    node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
output_c = channel
data.append(np.zeros([output_c, input_c, kernel[0], kernel[1]]).astype('float32'))
data.append(np.zeros([output_c, ]).astype('float32'))
else:
data = self.adjust_parameters(node)
self.weights[node.layer_name + '_weights'] = data[0]
......@@ -272,12 +278,17 @@ class CaffeOpMapper(OpMapper):
node.layer_type, params)
if data is None:
data = []
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(
    node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
output_c = channel
data.append(np.zeros([output_c, input_c, kernel[0], kernel[1]]).astype('float32'))
data.append(np.zeros([output_c, ]).astype('float32'))
else:
data = self.adjust_parameters(node)
self.weights[node.layer_name + '_weights'] = data[0]
......@@ -369,13 +380,17 @@ class CaffeOpMapper(OpMapper):
data = node.data
params = node.layer.inner_product_param
if data is None:
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0.'.format(
    node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
output_c = params.num_output
data = []
data.append(np.zeros([input_c, output_c]).astype('float32'))
data.append(np.zeros([output_c]).astype('float32'))
else:
data = self.adjust_parameters(node)
# Reshape the parameters to Paddle's ordering
......@@ -467,7 +482,7 @@ class CaffeOpMapper(OpMapper):
node.layer_name, node.layer_name + '_' + str(i)))
if i == len(points) - 2:
break
def Concat(self, node):
assert len(
node.inputs
......@@ -616,7 +631,8 @@ class CaffeOpMapper(OpMapper):
param_attr=attr)
def BatchNorm(self, node):
assert len(node.inputs) == 1, 'The count of BatchNorm node\'s input is not 1.'
input = self.graph.get_bottom_node(node, idx=0, copy=True)
params = node.layer.batch_norm_param
if hasattr(params, 'eps'):
......@@ -624,11 +640,16 @@ class CaffeOpMapper(OpMapper):
else:
eps = 1e-5
if node.data is None or len(node.data) != 3:
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(
    node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
mean = np.zeros([input_c, ]).astype('float32')
variance = np.zeros([input_c, ]).astype('float32')
scale = 0
else:
node.data = [np.squeeze(i) for i in node.data]
......@@ -655,11 +676,16 @@ class CaffeOpMapper(OpMapper):
def Scale(self, node):
if node.data is None:
print('The parameter of {} (type is {}) is not set. So we set the parameters as 0'.format(
    node.layer_name, node.layer_type))
input_c = node.input_shape[0][1]
self.weights[node.layer_name + '_scale'] = np.zeros([input_c, ]).astype('float32')
self.weights[node.layer_name + '_offset'] = np.zeros([input_c, ]).astype('float32')
else:
self.weights[node.layer_name + '_scale'] = np.squeeze(node.data[0])
self.weights[node.layer_name + '_offset'] = np.squeeze(node.data[1])
......
......@@ -43,7 +43,8 @@ def get_kernel_parameters(params):
[p_h, p_w] = [params.pad] * 2
elif len(params.pad) > 0:
p_h = params.pad_h if params.pad_h > 0 else params.pad[0]
p_w = params.pad_w if params.pad_w > 0 else params.pad[len(params.pad) - 1]
elif params.pad_h > 0 or params.pad_w > 0:
p_h = params.pad_h
p_w = params.pad_w
......
......@@ -65,4 +65,3 @@
| mNASNet | [pytorch(personal practice)](https://github.com/rwightman/gen-efficientnet-pytorch) |9|
| EfficientNet | [pytorch(personal practice)](https://github.com/rwightman/gen-efficientnet-pytorch) |9|
| SqueezeNet | [onnx official](https://s3.amazonaws.com/download.onnx/models/opset_9/squeezenet.tar.gz) |9|