
<img src='https://github.com/PaddlePaddle/PaddleFL/blob/master/docs/source/_static/FL-logo.png' width = "400" height = "160">

[DOC](https://paddlefl.readthedocs.io/en/latest/) | [Quick Start](https://paddlefl.readthedocs.io/en/latest/compile_and_intall.html) | [中文](./README_cn.md)

PaddleFL is an open source federated learning framework based on PaddlePaddle. Researchers can easily replicate and compare different federated learning algorithms with PaddleFL. Developers can also benefit from PaddleFL because it is easy to deploy a federated learning system in large-scale distributed clusters. PaddleFL provides several federated learning strategies, with applications in computer vision, natural language processing, recommendation, and so on. It also supports applying traditional machine learning training strategies, such as multi-task learning and transfer learning, in federated learning settings. Based on PaddlePaddle's large-scale distributed training and elastic scheduling of training jobs on Kubernetes, PaddleFL can be deployed easily with full-stack open source software.

## Federated Learning

Data is becoming increasingly expensive, and sharing raw data across organizations is very hard. Federated learning aims to solve the problem of data isolation and to enable secure sharing of data knowledge among organizations. The concept of federated learning was proposed by researchers at Google [1, 2, 3].

## Overview of PaddleFL

<img src='images/FL-framework.png' width = "1000" height = "320" align="middle"/>

In PaddleFL, horizontal and vertical federated learning strategies will be implemented according to the categorization given in [4]. Application demonstrations in natural language processing, computer vision and recommendation will be provided in PaddleFL. 

#### A. Federated Learning Strategy

- **Vertical Federated Learning**: Logistic Regression with PrivC [5], Neural Network with MPC [11]

- **Horizontal Federated Learning**: Federated Averaging [2], Differential Privacy [6], Secure Aggregation

#### B. Training Strategy

- **Multi Task Learning** [7]

- **Transfer Learning** [8]

- **Active Learning**

There are mainly two components in PaddleFL: **Data Parallel** and **Federated Learning with MPC (PFM)**.

With Data Parallel, distributed data holders can complete their federated learning tasks using common horizontal federated strategies, such as FedAvg and DPSGD.

PFM is implemented on top of secure multi-party computation (MPC) to enable secure training and prediction. As a key product of PaddleFL, PFM intrinsically supports federated learning, including horizontal, vertical and transfer learning scenarios. Users with little cryptography expertise can still train models or run prediction on encrypted data.

## Installation

We **highly recommend** running PaddleFL in Docker.

```sh
#Pull and run the docker
docker pull paddlepaddle/paddlefl:latest
docker run --name <docker_name> --net=host -it -v $PWD:/paddle <image id> /bin/bash

#Install paddle_fl
pip install paddle_fl
```

If you want to compile and install from source code, please click [here](./docs/source/md/compile_and_install.md) to get instructions. 

We also provide a stable Redis package for you to download and install, which is used in tasks with MPC.

```sh
wget --no-check-certificate https://paddlefl.bj.bcebos.com/redis-stable.tar
tar -xf redis-stable.tar
cd redis-stable && make
```

## Easy deployment with Kubernetes

### Data Parallel
```sh
kubectl apply -f ./python/paddle_fl/paddle_fl/examples/k8s_deployment/master.yaml
```
Please refer to the [K8S deployment example](./python/paddle_fl/paddle_fl/examples/k8s_deployment/README.md) for details.

You can also refer to [K8S cluster application and kubectl installation](./python/paddle_fl/paddle_fl/examples/k8s_deployment/deploy_instruction.md) to deploy your K8S cluster.

### Federated Learning with MPC

To be added.

## Framework design of PaddleFL

### Data Parallel

<img src='images/FL-training.png' width = "1000" height = "400" align="middle"/>

In Data Parallel, the components for defining a federated learning task and training a federated learning job are as follows:

#### A. Compile Time

- **FL-Strategy**: a user can define federated learning strategies, such as FedAvg [2], with FL-Strategy.

- **User-Defined-Program**: a PaddlePaddle program that defines the machine learning model structure and training strategies, such as multi-task learning.

- **Distributed-Config**: In federated learning, the system is deployed in distributed settings. Distributed-Config defines the distributed training node information.

- **FL-Job-Generator**: Given an FL-Strategy, a User-Defined-Program and a Distributed-Config, FL-Jobs for the federated server and workers are generated by FL-Job-Generator. The FL-Jobs are sent to the organizations and the federated parameter server for run-time execution. A minimal sketch of this compile-time flow is shown below.
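
The sketch below illustrates how these compile-time components might fit together to produce FL-Jobs. It is a minimal, illustrative example loosely based on the demos in the examples directory; module paths, class names such as `JobGenerator` and `FLStrategyFactory`, and method signatures are assumptions that may differ across PaddleFL versions.

```python
# Minimal compile-time sketch (illustrative only; module paths and method
# names follow the examples directory and may differ across versions).
import paddle.fluid as fluid
from paddle_fl.paddle_fl.core.master.job_generator import JobGenerator
from paddle_fl.paddle_fl.core.strategy.fl_strategy_base import FLStrategyFactory

# User-Defined-Program: an ordinary PaddlePaddle model definition.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_pre = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=y_pre, label=y))

# FL-Strategy: choose a horizontal strategy, e.g. FedAvg.
build_strategy = FLStrategyFactory()
build_strategy.fed_avg = True
build_strategy.inner_step = 10          # local steps between aggregations
strategy = build_strategy.create_fl_strategy()

# Distributed-Config + FL-Job-Generator: emit FL-Jobs for server and workers.
job_generator = JobGenerator()
job_generator.set_optimizer(fluid.optimizer.SGD(learning_rate=0.1))
job_generator.set_losses([loss])
job_generator.set_startup_program(fluid.default_startup_program())
job_generator.set_infer_feed_and_target_names([x.name, y.name], [y_pre.name])
job_generator.generate_fl_job(
    strategy,
    server_endpoints=["127.0.0.1:8181"],   # federated parameter server
    worker_num=2,                          # participating organizations
    output="fl_job_config")                # FL-Jobs are written here
```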

#### B. Run Time

- **FL-Server**: a federated parameter server that usually runs in the cloud or in third-party clusters.

- **FL-Worker**: Each organization participating in federated learning has one or more federated workers that communicate with the federated parameter server.

- **FL-Scheduler**: decides which set of trainers can join the training before each update cycle. A run-time launch sketch is shown below.
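
The following is a minimal run-time sketch showing how a scheduler and a worker could be launched from the FL-Jobs generated at compile time. Class names such as `FLScheduler`, `FLRunTimeJob` and `FLTrainerFactory`, and their methods, are taken from the examples directory and should be treated as assumptions; check the examples shipped with the version you install.

```python
# Run-time sketch (illustrative only; class and method names follow the
# examples directory and may differ across versions). In practice, each part
# below runs as its own process/script.

# --- FL-Scheduler process: samples workers for each update cycle ---
from paddle_fl.paddle_fl.core.scheduler.agent_master import FLScheduler

worker_num, server_num = 2, 1
scheduler = FLScheduler(worker_num, server_num, port=9091)
scheduler.set_sample_worker_num(worker_num)  # workers sampled per round
scheduler.init_env()
scheduler.start_fl_training()

# --- FL-Worker process (one per organization) ---
from paddle_fl.paddle_fl.core.master.fl_job import FLRunTimeJob
from paddle_fl.paddle_fl.core.trainer.fl_trainer import FLTrainerFactory

trainer_id = 0
job = FLRunTimeJob()
job.load_trainer_job("fl_job_config", trainer_id)  # FL-Job from compile time
job._scheduler_ep = "127.0.0.1:9091"               # FL-Scheduler endpoint
trainer = FLTrainerFactory().create_fl_trainer(job)
trainer.start()
# ...then feed local data in a training loop, e.g. trainer.run(feed=..., fetch=[])
```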

For more instructions, please refer to the [examples](./python/paddle_fl/paddle_fl/examples).

### Federated Learning with MPC

<img src='images/PFM-overview.png' width = "1000" height = "446" align="middle"/>

Paddle FL MPC (PFM) implements secure training and inference tasks based on an underlying MPC protocol such as ABY3 [11], a highly efficient three-party computing model.

In ABY3, participants are classified into the roles of Input Party (IP), Computing Party (CP) and Result Party (RP). Input Parties (e.g., the training data/model owners) encrypt and distribute data or models to Computing Parties. Computing Parties (e.g., VMs in the cloud) conduct training or inference tasks based on specific MPC protocols; they are restricted to seeing only the encrypted data or models, which guarantees data privacy. When the computation is completed, one or more Result Parties (e.g., data owners or a specified third party) receive the encrypted results from the Computing Parties and reconstruct the plaintext results. Roles can overlap; e.g., a data owner can also act as a computing party.

A full training or inference process in PFM consists of three main phases: data preparation, training/inference, and result reconstruction.

#### A. Data preparation

- **Private data alignment**: PFM enables data owners (IPs) to find records with identical keys (such as a UUID) without revealing their private data to each other. This is especially useful in vertical learning cases, where segmented features with the same keys need to be identified and aligned across all owners in a private manner before training.

- **Encryption and distribution**: In PFM, data and models from IPs are encrypted using secret sharing [10] and then sent to CPs via direct transmission or distributed storage such as HDFS. Each CP obtains only one share of each piece of data, and is thus unable to recover the original value under the semi-honest model.
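
To illustrate why a single CP cannot recover the plaintext from its share, here is a toy additive secret-sharing example in plain Python. It is only a conceptual illustration of the idea in [10]; PFM's actual data utilities use ABY3-style replicated sharing with fixed-point encoding and a different API.

```python
# Toy 3-party additive secret sharing over a 64-bit ring (conceptual only;
# PFM's real utilities use ABY3 replicated sharing and differ in detail).
import random

RING = 1 << 64

def share(secret, n_parties=3):
    """Split an integer into n additive shares that sum to the secret mod 2^64."""
    shares = [random.randrange(RING) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    """Recovering the secret requires all shares."""
    return sum(shares) % RING

secret = 42
shares = share(secret)
assert reconstruct(shares) == secret
# Any single share is a uniformly random ring element and reveals nothing about the secret.
```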

#### B. Training/inference

A PFM program is exactly a PaddlePaddle program and is executed like a normal PaddlePaddle program. Before training/inference, a user needs to choose an MPC protocol, define a machine learning model and specify its training strategy. Typical machine learning operators over encrypted data are provided in `paddle_fl.mpc`; their instances are created and run in order by the Executor at run time.
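
As a rough illustration, a PFM training program might look like the sketch below, loosely following the uci_demo example. The module name `paddle_fl.mpc`, the `pfl_mpc.init` call and the `pfl_mpc.layers`/`pfl_mpc.optimizer` namespaces are taken from that example; treat the exact signatures as assumptions and refer to the demo shipped with your installed version.

```python
# Sketch of a PFM (MPC) training program, loosely following uci_demo
# (exact signatures are assumptions; see the demo for the installed version).
import paddle.fluid as fluid
import paddle_fl.mpc as pfl_mpc

role, redis_server, redis_port = 0, "127.0.0.1", 6379   # this party's role and the Redis service
pfl_mpc.init("aby3", role, "localhost", redis_server, redis_port)

# The operators below are the mpc counterparts of normal PaddlePaddle layers
# and work directly on secret-shared (encrypted) data.
x = pfl_mpc.data(name='x', shape=[8, 13], dtype='int64')
y = pfl_mpc.data(name='y', shape=[8, 1], dtype='int64')
y_pre = pfl_mpc.layers.fc(input=x, size=1)
avg_loss = pfl_mpc.layers.mean(pfl_mpc.layers.square_error_cost(input=y_pre, label=y))
pfl_mpc.optimizer.SGD(learning_rate=0.001).minimize(avg_loss)

exe = fluid.Executor(place=fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# ...then feed the locally held shares batch by batch and run the main program as usual.
```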

For more information about the training/inference phase, please refer to the following [doc](./docs/source/md/mpc_train.md).

#### C. Result reconstruction

Upon completion of the secure training (or inference) job, the models (or prediction results) will be output by CPs in encrypted form. Result Parties can collect the encrypted results, decrypt them using the tools in PFM, and deliver the plaintext results to users.

For more instructions, please refer to the [mpc examples](./python/paddle_fl/mpc/examples).

## Benchmark task

### Data Parallel 

Gru4Rec [9] introduces a recurrent neural network model for session-based recommendation. PaddlePaddle's Gru4Rec implementation is available at https://github.com/PaddlePaddle/models/tree/develop/PaddleRec/gru4rec. An example is given in [Gru4Rec in Federated Learning](https://paddlefl.readthedocs.io/en/latest/examples/gru4rec_examples.html).

### Federated Learning with MPC 

We conduct tests on PFM using the Boston house price dataset; the implementation is given in [uci_demo](./python/paddle_fl/mpc/examples/uci_demo).

## Ongoing and Future Work

- Vertical Federated Learning will support more algorithms.

- A K8S deployment scheme for PFM will be added.

- An FL mobile simulator will be open sourced in future versions.

## Reference

[1]. Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016

[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017

[3]. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016

[4]. Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. **Federated Machine Learning: Concept and Applications.** 2019

[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC - A framework for efficient Secure Two-Party Computation.** In Proc. of SecureComm 2019

[6]. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016

[7]. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar. **Federated Multi-Task Learning.** 2016

[8]. Yang Liu, Tianjian Chen, Qiang Yang. **Secure Federated Transfer Learning.** 2018

[9]. Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk. **Session-based Recommendations with Recurrent Neural Networks.** 2016

[10]. https://en.wikipedia.org/wiki/Secret_sharing

[11]. Payman Mohassel and Peter Rindal. **ABY3: A Mixed Protocol Framework for Machine Learning.** In Proc. of CCS 2018