Commit dc29a17c authored by: J jhjiangcs

improve doc and fix bugs (data type).

Parent 538d0bfd
......@@ -105,5 +105,4 @@ REGISTER_OPERATOR(
ops::MpcSGDOpInferVarType);
REGISTER_OP_CPU_KERNEL(
mpc_sgd,
ops::MpcSGDOpKernel<paddle::platform::CPUDeviceContext, int64_t, float>,
ops::MpcSGDOpKernel<paddle::platform::CPUDeviceContext, int64_t, double>);
ops::MpcSGDOpKernel<paddle::platform::CPUDeviceContext, int64_t, float>);
......@@ -299,8 +299,8 @@ def transpile(program=None):
# process initialized params that should be 0
set_tensor_value = np.array([param_tensor, param_tensor]).astype(np.int64)
param.get_tensor().set(set_tensor_value, place)
else:
param.get_tensor().set(np.array(param.get_tensor()).astype('float64'), place)
#else:
# param.get_tensor().set(np.array(param.get_tensor()).astype('float64'), place)
# trigger sync to replace old ops.
op_num = global_block.desc.op_size()
......@@ -327,7 +327,7 @@ def _transpile_type_and_shape(block):
if var.name != "feed" and var.name != "fetch":
mpc_vars_names.add(var.name)
if var_name in plain_vars:
var.desc.set_dtype(fluid.framework.convert_np_dtype_to_dtype_(np.float64))
# var.desc.set_dtype(fluid.framework.convert_np_dtype_to_dtype_(np.float64))
continue
# set mpc param shape = [2, old_shape]
encrypted_var_shape = (ABY3_SHARE_DIM,) + var.shape
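As a quick illustration of the shape and dtype conversion performed here, the following small numpy sketch shows what an encrypted parameter looks like; the plaintext parameter shape is made up, and `ABY3_SHARE_DIM == 2` is assumed, as the `[param_tensor, param_tensor]` stacking above suggests:
```python
import numpy as np

ABY3_SHARE_DIM = 2  # assumed value, implied by the [param_tensor, param_tensor] stacking above

# A zero-initialized parameter with an illustrative plaintext shape of (4, 3).
param_tensor = np.zeros((4, 3))

# Each party stores a pair of shares, so the encrypted variable gets an extra
# leading dimension and is stored as int64 (the fixed-point share type).
set_tensor_value = np.array([param_tensor, param_tensor]).astype(np.int64)

print(set_tensor_value.shape)  # (2, 4, 3) == (ABY3_SHARE_DIM,) + (4, 3)
print(set_tensor_value.dtype)  # int64
```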
......@@ -389,8 +389,8 @@ def encrypt_model(program, mpc_model_dir=None, model_filename=None):
param.get_tensor()._set_dims(mpc_var.shape)
set_tensor_value = get_aby3_shares(param_tensor_shares, idx)
param.get_tensor().set(set_tensor_value, place)
else:
param.get_tensor().set(np.array(param.get_tensor()).astype('float64'), place)
#else:
# param.get_tensor().set(np.array(param.get_tensor()).astype('float64'), place)
param_share_dir = os.path.join(
mpc_model_dir, MODEL_SHARE_DIR + "_" + str(idx))
......@@ -468,7 +468,7 @@ def decrypt_model(mpc_model_dir, plain_model_path, mpc_model_filename=None, plai
else:
plain_var_shape = mpc_var.shape[1:]
mpc_var.desc.set_shape(plain_var_shape)
mpc_var.desc.set_dtype(fluid.framework.convert_np_dtype_to_dtype_(np.float32))
#mpc_var.desc.set_dtype(fluid.framework.convert_np_dtype_to_dtype_(np.float32))
# remove init op
first_mpc_op = global_block.ops[0]
......
......@@ -10,9 +10,9 @@ This document introduces how to encrypt a PaddlePaddle model, then train or u
Model encryption demo contains three scenarios:
* **Encrypt Model and Train**
* **Transpile Model and Train**
Each party loads a PaddlePaddle model and encrypts it. Each party feeds the encrypted data to train the encrypted model. Each party gets one share of the encrypted model. The PaddlePaddle model can be reconstructed with the three encrypted model shares.
Each party loads an empty PaddlePaddle model and transpiles it into an encrypted empty model. Each party feeds encrypted data to train the encrypted model. Each party gets one share of the encrypted model. The PaddlePaddle model can be reconstructed with the three encrypted model shares.
* **Encrypt Pre-trained Model and Update**
......@@ -24,7 +24,7 @@ Pre-trained model is encrypted and distributed to multiple parties. Parties pre
### 3. Usage
#### 3.1 Train Model
#### 3.1 Train a New Model
<img src='images/model_training.png' width = "500" height = "550" align="middle"/>
......@@ -38,7 +38,7 @@ This figure shows model encryption and training with Paddle-MPC.
exe.run(fluid.default_startup_program())
```
2). **Transpile(Encrypt) Model**: Users use the API `aby3.transpile` to encrypt the current default PaddlePaddle model.
2). **Transpile Model**: Users use the API `aby3.transpile` to transpile the current PaddlePaddle model into an encrypted model.
```python
aby3.transpile()
......@@ -101,7 +101,7 @@ This figure shows how to update pre-trained model with Paddle-MPC.
4). **Decrypt Model**: Users can decrypt the model with the three model shares.
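For context, a minimal sketch of that decryption step follows, assuming the `aby3` utility module is imported as in the other Paddle-MPC examples and using the `decrypt_model` parameters visible in the diff above; both directory paths are placeholders:
```python
import paddle_fl.mpc.data_utils.aby3 as aby3  # import path assumed from Paddle-MPC examples

# Combine the three saved model shares (one per party) back into a plaintext model.
aby3.decrypt_model(mpc_model_dir="./mpc_model_dir",
                   plain_model_path="./plain_model_dir")
```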
#### 3.3 Predict Model
#### 3.3 Model Inference
<img src='images/model_infer.png' width = "500" height = "380" align="middle"/>
......
......@@ -12,7 +12,7 @@
* **Encrypt Model and Train**
Multiple parties jointly train an existing model with their own data. In this scenario, each party can directly load a network model from the model library or a custom network model and encrypt it; the parties then jointly train and save the encrypted model using encrypted training data. After training, each party holds only the encrypted model, i.e. a share of the plaintext model, and the complete plaintext model can be recovered by decrypting the model shares when needed.
Multiple parties jointly train an existing empty (untrained) model with their own data. In this scenario, each party can directly load an empty network model from the model library or a custom empty network model and encrypt it; the parties then jointly train and save the encrypted empty model using encrypted training data. After training, each party holds only the encrypted model, i.e. a share of the plaintext model, and the complete plaintext model can be recovered by decrypting the model shares when needed.
* **Encrypt Pre-trained Model and Update**
......@@ -24,7 +24,7 @@
### 3. Usage
#### 3.1 Model Training
#### 3.1 Encrypt and Train a New Model
<img src='images/model_training.png' width = "500" height = "550" align="middle"/>
......
......@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This module provides networks.
This module provides a linear regression network.
"""
import paddle
import paddle.fluid as fluid
......
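A minimal sketch of what such a plaintext linear-regression network could look like before transpilation, using only standard `paddle.fluid` layers; the variable names and the 13-dimensional feature size are illustrative, not taken from the repo:
```python
import paddle.fluid as fluid

def linear_regression_network(feature_dim=13):
    # Plaintext inputs; aby3.transpile() later rewrites vars/ops into their MPC forms.
    x = fluid.data(name='x', shape=[None, feature_dim], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')

    # A single fully-connected layer is enough for linear regression.
    y_pre = fluid.layers.fc(input=x, size=1)

    cost = fluid.layers.square_error_cost(input=y_pre, label=y)
    avg_loss = fluid.layers.mean(cost)
    return x, y, y_pre, avg_loss
```
In the training script, an optimizer such as `fluid.optimizer.SGD` would typically be attached to `avg_loss`; after transpilation the SGD update presumably runs as the `mpc_sgd` kernel registered in the C++ hunk above.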
......@@ -2,15 +2,15 @@
([简体中文](./README_CN.md)|English)
This document introduces how to encrypt a plaintext model and train the encrypted model based on Paddle-MPC.
This document introduces how to transpile an empty PaddlePaddle model and train the resulting encrypted model based on Paddle-MPC.
### 1. Prepare Data
Run the script `../process_data.py` to generate encrypted training and testing data.
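As a rough sketch of the kind of share generation `process_data.py` performs, assuming the `aby3.make_shares` and `aby3.save_aby3_shares` helpers from Paddle-MPC's data utilities behave as in its other demos; the data, paths, and import path are illustrative assumptions:
```python
import numpy as np
import paddle_fl.mpc.data_utils.aby3 as aby3  # import path assumed

# Illustrative plaintext samples (4 rows, 13 features each).
features = np.random.rand(4, 13).astype('float32')

def feature_share_reader():
    # Split each plaintext row into ABY3 secret shares.
    for row in features:
        yield aby3.make_shares(row)

# Persist one share file per party; "/tmp/demo_feature" is a placeholder prefix,
# and the helper is assumed to append per-party suffixes as in the Paddle-MPC demos.
aby3.save_aby3_shares(feature_share_reader, "/tmp/demo_feature")
```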
### 2. Encrypt Model, Train, and Save
### 2. Transpile Model, Train, and Save
Encrypt the plaintext PaddlePaddle model, train the encrypted model, and save the trained encrypted model with the following script.
Transpile the empty PaddlePaddle model into an encrypted empty model, train the encrypted model, and save the trained encrypted model with the following script.
```bash
bash run_standalone.sh encrypt_and_train_model.py
......
......@@ -2,13 +2,13 @@
(简体中文|[English](./README.md))
This example briefly describes how to encrypt a plaintext model and then train it based on PaddleFL-MPC.
This example briefly describes how to encrypt an empty plaintext model and then train it based on PaddleFL-MPC.
### 1. Prepare Encrypted Data
Run the script `../process_data.py` to encrypt the training data.
### 2. Train the Plaintext Model, then Encrypt and Save It
### 2. Encrypt the Empty Plaintext Model, then Train and Save It
Use the following command to encrypt, train, and save the model:
......