[DOC](https://paddlefl.readthedocs.io/en/latest/) | [Quick Start](https://paddlefl.readthedocs.io/en/latest/instruction.html) | [中文](./README_cn.md)

PaddleFL is an open source federated learning framework based on PaddlePaddle. Researchers can easily replicate and compare different federated learning algorithms with PaddleFL. Developers can also benefit from PaddleFL because it makes deploying a federated learning system in large-scale distributed clusters easy. PaddleFL will provide several federated learning strategies with applications in computer vision, natural language processing, recommendation, and so on. It will also support applying traditional machine learning training strategies, such as multi-task learning and transfer learning, in federated learning settings. Based on PaddlePaddle's large-scale distributed training and its elastic scheduling of training jobs on Kubernetes, PaddleFL can be deployed easily on full-stack open-source software.

## Federated Learning

Data is becoming more and more expensive nowadays, and sharing raw data across organizations is very hard. Federated learning aims to solve the problem of data isolation while securely sharing data knowledge among organizations. The concept of federated learning was proposed by researchers at Google [1, 2, 3].

## Overview of PaddleFL

In PaddleFL, horizontal and vertical federated learning strategies will be implemented according to the categorization given in [4]. Application demonstrations in natural language processing, computer vision and recommendation will be provided in PaddleFL.

#### A. Federated Learning Strategy

- **Vertical Federated Learning**: Logistic Regression with PrivC [5], Neural Network with MPC [11]
- **Horizontal Federated Learning**: Federated Averaging [2], Differential Privacy [6], Secure Aggregation

#### B. Training Strategy

- **Multi Task Learning** [7]
- **Transfer Learning** [8]
- **Active Learning**

## Installation

We **highly recommend** running PaddleFL in Docker:

```sh
# Pull the image and start a container; <container_name> is a name of your choice
docker pull hub.baidubce.com/paddlefl/paddle_fl:latest
docker run --name <container_name> --net=host -it -v $PWD:/root hub.baidubce.com/paddlefl/paddle_fl:latest /bin/bash

# Install paddle_fl
pip install paddle_fl
```

If you want to compile and install from source code, please click [here](./docs/source/md/compile_and_install.md) for instructions.

We also provide a stable Redis package for you to download and install; it is used in tasks with MPC:

```sh
wget --no-check-certificate https://paddlefl.bj.bcebos.com/redis-stable.tar
tar -xf redis-stable.tar
cd redis-stable && make
```

## Easy deployment with Kubernetes

### Data Parallel

```sh
kubectl apply -f ./python/paddle_fl/paddle_fl/examples/k8s_deployment/master.yaml
```

Please refer to the [K8S deployment example](./python/paddle_fl/paddle_fl/examples/k8s_deployment/README.md) for details.

You can also refer to [K8S cluster application and kubectl installation](./python/paddle_fl/paddle_fl/examples/k8s_deployment/deploy_instruction.md) to deploy your K8S cluster.

### Federated Learning with MPC

To be added.

## Framework design of PaddleFL

There are mainly two components in PaddleFL: **Data Parallel** and **Federated Learning with MPC (PFM)**.

With Data Parallel, distributed data holders can finish their federated learning tasks based on common horizontal federated strategies, such as FedAvg, DPSGD, etc. In addition, PFM is implemented on top of secure multi-party computation (MPC) to enable secure training and prediction. As a key product of PaddleFL, PFM intrinsically supports federated learning well, including horizontal, vertical and transfer learning scenarios. Users with little cryptography expertise can still train models or conduct prediction on encrypted data. Below, we introduce both components in detail.

### Data Parallel

In PaddleFL, the components for defining a federated learning task and training a federated learning job are as follows:

#### A. Compile Time

- **FL-Strategy**: a user can define federated learning strategies with FL-Strategy, such as Fed-Avg [2].
- **User-Defined-Program**: a PaddlePaddle program that defines the machine learning model structure and training strategies, such as multi-task learning.
- **Distributed-Config**: in federated learning, the system should be deployed in distributed settings. Distributed Training Config defines distributed training node information.
- **FL-Job-Generator**: given an FL-Strategy, a User-Defined-Program and a Distributed Training Config, FL-Jobs for the federated server and workers are generated through FL-Job-Generator. The FL-Jobs are sent to the organizations and the federated parameter server for run-time execution.

#### B. Run Time

- **FL-Server**: the federated parameter server, which usually runs in the cloud or in third-party clusters.
- **FL-Worker**: each organization participating in federated learning has one or more federated workers that communicate with the federated parameter server.
- **FL-Scheduler**: decides which set of trainers can join the training before each update cycle.

For more instructions, please refer to the [examples](./python/paddle_fl/paddle_fl/examples). Minimal compile-time and run-time sketches are shown below.
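To make the compile-time flow concrete, here is a minimal sketch of generating FL-Jobs with the FedAvg strategy, written in the style of the bundled examples; the toy linear model, the endpoint `127.0.0.1:8181`, the worker count and the `fl_job_config` output directory are illustrative placeholders, not fixed values.

```python
import paddle.fluid as fluid
from paddle_fl.paddle_fl.core.master.job_generator import JobGenerator
from paddle_fl.paddle_fl.core.strategy.fl_strategy_factory import FLStrategyFactory

# User-Defined-Program: a toy linear model standing in for a real network.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_pre = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=y_pre, label=y))

# FL-Job-Generator: bundle the program, optimizer and loss.
job_generator = JobGenerator()
job_generator.set_optimizer(fluid.optimizer.SGD(learning_rate=0.1))
job_generator.set_losses([loss])
job_generator.set_startup_program(fluid.default_startup_program())
job_generator.set_infer_feed_and_target_names([x.name, y.name], [loss.name])

# FL-Strategy: FedAvg with 10 local steps between aggregations.
build_strategy = FLStrategyFactory()
build_strategy.fed_avg = True
build_strategy.inner_step = 10
strategy = build_strategy.create_fl_strategy()

# Distributed-Config: server endpoint and worker count (placeholders).
# FL-Jobs for the server and both workers land in ./fl_job_config.
endpoints = ["127.0.0.1:8181"]
job_generator.generate_fl_job(
    strategy, server_endpoints=endpoints, worker_num=2, output="fl_job_config")
```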
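And a run-time sketch consuming the FL-Jobs generated above, again following the bundled examples. The scheduler, the server and each worker run as separate processes in practice; the endpoints, ids and stopping condition below are placeholder assumptions.

```python
# Run-time roles; each block below runs as its own process in a real job.
from paddle_fl.paddle_fl.core.scheduler.agent_master import FLScheduler
from paddle_fl.paddle_fl.core.server.fl_server import FLServer
from paddle_fl.paddle_fl.core.trainer.fl_trainer import FLTrainerFactory
from paddle_fl.paddle_fl.core.master.fl_job import FLJob

# FL-Scheduler: samples which workers join each update cycle.
scheduler = FLScheduler(worker_num=2, server_num=1, port=9091)
scheduler.set_sample_worker_num(2)  # let all workers join every cycle
scheduler.init_env()
scheduler.start_fl_training()

# FL-Server: loads its generated job and serves the global parameters.
server_job = FLJob()
server_job.load_server_job("fl_job_config", 0)
server_job._scheduler_ep = "127.0.0.1:9091"  # where the scheduler listens
server = FLServer()
server.set_server_job(server_job)
server._current_ep = "127.0.0.1:8181"
server.start()

# FL-Worker: loads its job, then trains on local, never-shared data.
trainer_job = FLJob()
trainer_job.load_trainer_job("fl_job_config", 0)  # 0 is this worker's id
trainer_job._scheduler_ep = "127.0.0.1:9091"
trainer = FLTrainerFactory().create_fl_trainer(trainer_job)
trainer._current_ep = "127.0.0.1:9000"
trainer.start()
step = 0
while not trainer.stop():
    # a real worker feeds local batches here, e.g. trainer.run(feed=..., fetch=[...])
    step += 1
    if step >= 100:  # placeholder stopping condition
        break
```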
### Federated Learning with MPC

PFM is built on secure multi-party computation protocols based on secret sharing [10], such as ABY3 [11]. Training and prediction are expressed through the `paddle_fl.mpc` module, whose programming interface follows PaddlePaddle's, so models run directly on encrypted data. A minimal training sketch is given below.
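To give a feel for PFM programming, here is a minimal sketch of encrypted linear-regression training with `paddle_fl.mpc`, modeled loosely on the UCI housing demo behind the benchmark below. The role value, Redis address and batch size are placeholder assumptions; in a real run, each of the three computing parties executes the same script with its own `role` (0, 1 or 2), coordinated by the Redis server installed above.

```python
import paddle.fluid as fluid
import paddle_fl.mpc as pfl_mpc

# Placeholder party/network settings: ABY3 is a three-party protocol,
# so role is 0, 1 or 2; a Redis instance coordinates the parties.
role, redis_ip, redis_port = 0, "127.0.0.1", 6379
pfl_mpc.init("aby3", role, "localhost", redis_ip, redis_port)

# Encrypted inputs: 13 features and one label per sample, held as shares.
BATCH_SIZE = 10
x = pfl_mpc.data(name='x', shape=[BATCH_SIZE, 13], dtype='int64')
y = pfl_mpc.data(name='y', shape=[BATCH_SIZE, 1], dtype='int64')

# A linear model trained with SGD, entirely on encrypted data.
y_pre = pfl_mpc.layers.fc(input=x, size=1)
cost = pfl_mpc.layers.square_error_cost(input=y_pre, label=y)
avg_loss = pfl_mpc.layers.mean(cost)
pfl_mpc.optimizer.SGD(learning_rate=0.001).minimize(avg_loss)

exe = fluid.Executor(place=fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Each training step would then feed this party's shares of x and y into
# exe.run(...); see the MPC examples for share preparation and decryption.
```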
## Benchmark task

### Horizontal Federated Learning

Gru4Rec [9] introduces a recurrent neural network model for session-based recommendation. PaddlePaddle's Gru4Rec implementation is available at https://github.com/PaddlePaddle/models/tree/develop/PaddleRec/gru4rec. An example is given in [Gru4Rec in Federated Learning](https://paddlefl.readthedocs.io/en/latest/examples/gru4rec_examples.html).

### Federated Learning with MPC

#### A. Convergence of paddle_fl.mpc vs paddle

##### 1. Training Parameters

- Dataset: Boston house price dataset
- Number of Epochs: 20
- Batch Size: 10

##### 2. Experiment Results

The loss at the first step of each epoch shows that encrypted training with paddle_fl.mpc closely tracks plaintext training with Paddle:

| Epoch/Step | Loss (paddle_fl.mpc) | Loss (Paddle) |
| ---------- | -------------------- | ------------- |
| Epoch=0, Step=0 | 738.39491 | 738.46204 |
| Epoch=1, Step=0 | 630.68834 | 629.9071 |
| Epoch=2, Step=0 | 539.54683 | 538.1757 |
| Epoch=3, Step=0 | 462.41159 | 460.64722 |
| Epoch=4, Step=0 | 397.11516 | 395.11017 |
| Epoch=5, Step=0 | 341.83102 | 339.69815 |
| Epoch=6, Step=0 | 295.01114 | 292.83597 |
| Epoch=7, Step=0 | 255.35141 | 253.19429 |
| Epoch=8, Step=0 | 221.74739 | 219.65132 |
| Epoch=9, Step=0 | 193.26459 | 191.25981 |
| Epoch=10, Step=0 | 169.11423 | 167.2204 |
| Epoch=11, Step=0 | 148.63138 | 146.85835 |
| Epoch=12, Step=0 | 131.25081 | 129.60391 |
| Epoch=13, Step=0 | 116.49708 | 114.97599 |
| Epoch=14, Step=0 | 103.96669 | 102.56854 |
| Epoch=15, Step=0 | 93.31706 | 92.03858 |
| Epoch=16, Step=0 | 84.26219 | 83.09653 |
| Epoch=17, Step=0 | 76.55664 | 75.49785 |
| Epoch=18, Step=0 | 69.99673 | 69.03561 |
| Epoch=19, Step=0 | 64.40562 | 63.53539 |

## Ongoing and Future Work

- Vertical Federated Learning will support more algorithms.
- Add a K8S deployment scheme for Paddle Encrypted.

## Reference

[1]. Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016

[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017

[3]. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016

[4]. Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. **Federated Machine Learning: Concept and Applications.** 2019

[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC: A Framework for Efficient Secure Two-Party Computation.** In Proc. of SecureComm 2019

[6]. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016

[7]. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar. **Federated Multi-Task Learning.** 2017

[8]. Yang Liu, Tianjian Chen, Qiang Yang. **Secure Federated Transfer Learning.** 2018

[9]. Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk. **Session-based Recommendations with Recurrent Neural Networks.** 2016

[10]. https://en.wikipedia.org/wiki/Secret_sharing

[11]. Payman Mohassel and Peter Rindal. **ABY3: A Mixed Protocol Framework for Machine Learning.** In Proc. of CCS 2018