Unverified commit 5c143edc, authored by S SunAhong1993, committed by GitHub

Update README_en.md

Parent: 547bd6ed
# caffe2fluid
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)

This tool is used to convert a Caffe model to a Fluid model. The [doc](doc/ReadMe.md) directory compares and analyzes the common APIs of Caffe and PaddlePaddle.

## Prerequisites

> python >= 2.7
> numpy
> protobuf >= 3.6.0
> future

**The conversion process of caffe2fluid relies only on the conditions above.**
To test the model after conversion, it is recommended to also install Caffe and PaddlePaddle in the same environment; for installation, please refer to the [Installation Documentation](prepare.md).
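As a quick, optional sanity check (a sketch, not part of caffe2fluid itself), the snippet below verifies that the Python-side prerequisites are importable and prints their versions; it assumes the packages were installed normally, e.g. with pip.
```
# Optional sanity check for caffe2fluid's Python-side prerequisites.
# Assumes numpy, protobuf and future are installed (e.g. via pip); adjust as needed.
import sys

import numpy
import google.protobuf as protobuf
import future  # noqa: F401  (listed as a prerequisite)

print('python  :', sys.version.split()[0])   # expected >= 2.7
print('numpy   :', numpy.__version__)
print('protobuf:', protobuf.__version__)     # expected >= 3.6.0
```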
## HowTo

### Model Conversion
1. Convert the Caffe model to PaddlePaddle model code and a parameter file (the parameters are saved in numpy format).
```
# --def_path : The path of Caffe's configuration file (.prototxt)
# --caffemodel : The path of Caffe's model file (.caffemodel)
# --data-output-path : The save path of the converted model parameters (.npy)
# --code-output-path : The save path of the converted model code (.py)
python convert.py --def_path alexnet.prototxt \
                  --caffemodel alexnet.caffemodel \
                  --data-output-path alexnet.npy \
                  --code-output-path alexnet.py
```
2. The model's network structure and parameters can then be serialized into the model format supported by the PaddlePaddle framework.
```
# --model-param-path : The save path of the serialized PaddlePaddle model
python alexnet.py --npy_path alexnet.npy --model-param-path ./fluid_model
```
You can also specify which layers the saved model should output.
```
# The outputs of the model are the fc8 layer and the prob layer.
python alexnet.py --npy_path alexnet.npy --model-param-path ./fluid --need-layers-name fc8,prob
```
For model loading and prediction, refer to the [official PaddlePaddle documentation](http://www.paddlepaddle.org/documentation/docs/en/1.3/api_guides/low_level/inference_en.html).
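As a minimal illustration (a sketch, not caffe2fluid's own API), the code below assumes the model serialized to `./fluid_model` in the step above is in PaddlePaddle's inference-model format and runs a single forward pass on random data; the AlexNet input shape is an assumption and may differ for your model.
```
# Minimal loading/prediction sketch, assuming ./fluid_model (from the step above)
# holds a model in PaddlePaddle's inference-model format.
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# Load the serialized program together with its parameters.
program, feed_names, fetch_targets = fluid.io.load_inference_model('./fluid_model', exe)

# Random input with AlexNet's typical shape (N x C x H x W); replace with real preprocessed data.
image = np.random.random((1, 3, 227, 227)).astype('float32')
results = exe.run(program, feed={feed_names[0]: image}, fetch_list=fetch_targets)
print([r.shape for r in results])
```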
### Comparison of differences before and after model conversion
After the model is converted, the difference between the converted model and the original model can be compared layer by layer (**this step requires both Caffe and PaddlePaddle to be installed**).
```
# alexnet : The value of "name" in Caffe's configuration file (.prototxt)
# ../../alexnet.prototxt : The path of Caffe's configuration file
# ../../alexnet.caffemodel : The path of Caffe's model file
# ../../alexnet.py : The save path of the converted model code
# ../../alexnet.npy : The save path of the converted model parameters
# ./data/65.jpeg : The path of the image used for inference
cd examples/imagenet
bash tools/diff.sh alexnet ../../alexnet.prototxt \
                   ../../alexnet.caffemodel \
                   ../../alexnet.py \
                   ../../alexnet.npy \
                   ./data/65.jpeg
```
## How to convert custom layer
During model conversion, when an unsupported custom layer is encountered, users can add code to implement the custom layer themselves, so that the model can be converted completely. The implementation process is as follows; an illustrative sketch of such a layer file is given after these steps.
1. Implement your custom layer in a file under `kaffe/custom_layers`, e.g. mylayer.py
   - Implement ```shape_func(input_shape, [other_caffe_params])``` to calculate the output shape
   - Implement ```layer_func(inputs, name, [other_caffe_params])``` to construct a fluid layer
...@@ -65,9 +82,8 @@ This tool is used to convert a Caffe model to a Fluid model
export CAFFE2FLUID_CUSTOM_LAYERS=/path/to/caffe2fluid/kaffe
```
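For illustration, a `mylayer.py` under `kaffe/custom_layers` could look roughly like the sketch below. The layer kind (`MyLayer`), its parameter (`scale`), and the import path of `register()` are assumptions; follow the layer files already shipped in `kaffe/custom_layers` for the exact registration API.
```
# kaffe/custom_layers/mylayer.py -- illustrative sketch only.
import paddle.fluid as fluid

# The import path of register() is an assumption; check the existing custom layers
# in this directory for the exact location.
from .register import register


def mylayer_shape(input_shape, scale=1.0):
    """shape_func: this toy layer keeps the input shape unchanged."""
    return input_shape


def mylayer_layer(inputs, name, scale=1.0):
    """layer_func: build the equivalent fluid operation(s) for the Caffe layer."""
    return fluid.layers.scale(inputs, scale=scale, name=name)


# Register the layer under its Caffe type name so the converter can find it.
register(kind='MyLayer', shape=mylayer_shape, layer=mylayer_layer)
```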
### Tested models
caffe2fluid has been tested on the following models:
- Lenet: - Lenet:
[model addr](https://github.com/ethereon/caffe-tensorflow/blob/master/examples/mnist)
...