Unverified commit 21f0c25d, authored by Thomas Young, committed by GitHub

add encryption

Parent 350a4004
@@ -8,48 +8,45 @@ Paddle Serving provides model encryption inference. This document shows the details.
We use a symmetric encryption algorithm to encrypt the model. A symmetric algorithm uses the same key for encryption and decryption; it needs little computation, runs fast, and is the most commonly used kind of encryption.
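
To make the idea concrete, here is a minimal sketch of symmetric encryption in Python using the third-party `cryptography` package. It only illustrates the principle; it is not necessarily the cipher Paddle Serving itself uses:

```
# Illustration of symmetric encryption: the same key both encrypts and decrypts.
# Requires the third-party `cryptography` package; the cipher actually used by
# Paddle Serving may differ -- this only demonstrates the principle.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                      # the single shared key
cipher = Fernet(key)

plaintext = b"model and parameters, viewed as a byte string"
ciphertext = cipher.encrypt(plaintext)           # encrypt with the key
assert cipher.decrypt(ciphertext) == plaintext   # decrypt with the same key
```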
### Got an Encrypted Model
First of all, you need a key for encryption.

A normal model and its parameters can be understood as strings; applying the encryption algorithm (with your key as its parameter) to them turns them into an encrypted model and encrypted parameters.

We provide a simple demo to encrypt the model. See the [document](../python/examples/encryption/).
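
The linked demo is the recommended way to produce an encrypted model. Purely as an illustration of the principle, a sketch like the one below encrypts every file of a saved serving model directory with one key; the directory names, file layout, and cipher are assumptions, not the demo's actual implementation:

```
# Illustrative sketch only: encrypt each file of a saved model directory with one key.
# Directory names and the Fernet cipher are assumptions; use the linked demo in practice.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_model_dir(src_dir: str, dst_dir: str, key: bytes) -> None:
    cipher = Fernet(key)
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).iterdir():
        if path.is_file():
            # Model and parameter files are just byte strings; encrypt them as such.
            (dst / path.name).write_bytes(cipher.encrypt(path.read_bytes()))

key = Fernet.generate_key()
Path("key").write_bytes(key)                      # keep the key; the client needs it later
encrypt_model_dir("serving_server", "encrypt_server", key)
```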
### Start Encryption Service
Suppose you already have an encrypted model (in `encrypt_server/`); you can then start the encrypted model service by adding the additional command line parameter `--use_encryption_model`.
CPU Service
```
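# Start the CPU service with encryption enabled; because of --use_encryption_model
# the server waits for the client to send the key before loading encrypt_server/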
python -m paddle_serving_server.serve --model encrypt_server/ --port 9300 --use_encryption_model
```
GPU Service
```
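# Start the GPU service on GPU 0; it likewise waits for the key before loading the model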
python -m paddle_serving_server_gpu.serve --model encrypt_server/ --port 9300 --use_encryption_model --gpu_ids 0
```
At this point, the server does not actually start serving; it waits for the key.
### Client Encryption Inference
First of all, you must have the key that was used to encrypt the model.
Then configure your client with that key. When you connect to the server, the key is sent to the server, and the server keeps it.
Once the server gets the key, it uses the key to parse the model and starts the model prediction service.
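
For reference, a client-side sketch might look like the following. It is modeled on the example under `python/examples/encryption/`; the `use_key()` call, the `encryption=True` flag on `connect()`, and the feed/fetch names are assumptions taken from that example and may differ between versions:

```
# Hedged sketch of an encrypted-model client; interface names are assumptions
# based on python/examples/encryption/ and may differ across Serving versions.
import numpy as np
from paddle_serving_client import Client

client = Client()
client.load_client_config("encrypt_client/serving_client_conf.prototxt")
client.use_key("./key")                              # the key used to encrypt the model
client.connect(["127.0.0.1:9300"], encryption=True)  # the key is sent to the waiting server

# Hypothetical feed/fetch names; replace them with your model's inputs and outputs.
data = np.random.rand(1, 13).astype("float32")
fetch_map = client.predict(feed={"x": data}, fetch=["price"])
print(fetch_map)
```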
### Example of Model Encryption Inference
For an example of model encryption inference, see [`/python/examples/encryption/`](../python/examples/encryption/).
### Other Details
Encryption-related interfaces on the PaddlePaddle official website:
[Python encryption method](https://github.com/HexToString/Serving/blob/develop/python/paddle_serving_app/local_predict.py)
[C++ encryption method](https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/inference_deployment/inference/python_infer_cn.html#analysispre)