Commit de764c05 authored by dongdaxiang

modify doc

Parent 4448e541
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = source
BUILDDIR      = build

@@ -17,4 +16,4 @@ help:

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
:github_url: https://github.com/PaddlePaddle/PaddleFL

.. PaddleFL documentation master file, created by
   sphinx-quickstart on Sat Sep 28 10:48:34 2019.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Paddle Federated Learning
====================================

.. toctree::
   :maxdepth: 1
   :caption: QuickStart
   :hidden:

   quick_start.rst

.. mdinclude:: markdown/quick_start.md
Quick Start
===========

@@ -17,26 +27,25 @@ Quick Start
   :caption: Quick Start
   :hidden:

   quick_start.rst

See quick_start_ for quick start.

.. _quick_start: quick_start.html
.. toctree::
   :maxdepth: 1
   :caption: Examples

   examples/ctr_examples.rst

.. toctree::
   :maxdepth: 2
   :caption: API Reference

   api/paddle_fl
The Team
========

@@ -46,7 +55,7 @@ The Team
   :hidden:

   team.rst

PaddleFL is developed and maintained by the Nimitz group at Baidu.
License
=======
@@ -3,21 +3,18 @@ Quick Start Instructions

Install PaddleFL
----------------

To install PaddleFL, we need the following packages.

.. code-block:: sh

   paddlepaddle >= 1.6 (faster performance on 1.6)
   networkx
   cython

We can simply install PaddleFL by pip.

.. code-block:: sh

   pip install paddle_fl
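We can then verify that the package is importable.

.. code-block:: python

   import paddle_fl as fl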
.. mdinclude:: markdown/quick_start.md
# PaddleFL

PaddleFL is an open-source federated learning framework based on PaddlePaddle. Researchers can easily replicate and compare different federated learning algorithms with PaddleFL. Developers can also benefit from PaddleFL because it makes it easy to deploy a federated learning system in large-scale distributed clusters. PaddleFL will provide several federated learning strategies, with applications in computer vision, natural language processing, recommendation, and so on. It will also support applying traditional machine learning training strategies, such as multi-task learning and transfer learning, in federated learning settings. Based on PaddlePaddle's large-scale distributed training and elastic scheduling of training jobs on Kubernetes, PaddleFL can be deployed easily on top of full-stack open-source software.

## Federated Learning

Data is becoming more and more expensive nowadays, and sharing raw data across organizations is very hard. Federated learning aims to solve the problem of data isolation and the secure sharing of data knowledge among organizations. The concept of federated learning was proposed by researchers at Google [1, 2, 3].

## Overview of PaddleFL

<img src='images/FL-framework.png' width="1300" height="310" align="middle"/>

In PaddleFL, horizontal and vertical federated learning strategies will be implemented according to the categorization given in [4]. Application demonstrations in natural language processing, computer vision, and recommendation will be provided in PaddleFL.
#### Federated Learning Strategy

- **Vertical Federated Learning**: Logistic Regression with PrivC, Neural Network with third-party PrivC [5]
- **Horizontal Federated Learning**: Federated Averaging [2], Differential Privacy [6]

@@ -26,7 +24,6 @@ In PaddleFL, horizontal and vertical federated learning strategies will be imple

- **Active Learning**

## Framework design of PaddleFL

<img src='images/FL-training.png' width="1300" height="310" align="middle"/>

@@ -49,27 +46,6 @@ In PaddleFL, components for defining a federated learning task and training a fe

- **FL-Worker**: Each organization participating in federated learning will have one or more federated workers that communicate with the federated parameter server.
## Install Guide
``` shell
python setup.py install
python -c "import paddle_fl as fl"
```
## Quick-Start Example
``` shell
cd paddle_fl/demo
python fl_master.py
python fl_server.py 2> server0.errlog > server0.stdlog &
python fl_trainer.py 0 2> trainer0.errlog > trainer0.stdlog &
python fl_trainer.py 1 2> trainer1.errlog > trainer1.stdlog &
```
## Benchmark task
Gru4Rec [9] introduces a recurrent neural network model for session-based recommendation. PaddlePaddle's Gru4Rec implementation is at https://github.com/PaddlePaddle/models/tree/develop/PaddleRec/gru4rec.
## Ongoing and Future Work

- Experimental benchmark with public datasets in federated learning settings.

@@ -80,20 +56,21 @@ Gru4Rec [9] introduces recurrent neural network model in session-based recommend

## Reference
[1]. Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016

[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017

[3]. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016

[4]. Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. **Federated Machine Learning: Concept and Applications.** 2019

[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC - A Framework for Efficient Secure Two-Party Computation.** In Proceedings of the 15th EAI International Conference on Security and Privacy in Communication Networks, SecureComm 2019

[6]. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016

[7]. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar. **Federated Multi-Task Learning.** 2016

[8]. Yang Liu, Tianjian Chen, Qiang Yang. **Secure Federated Transfer Learning.** 2018

[9]. Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk. **Session-based Recommendations with Recurrent Neural Networks.** 2016
## Step 1: Define Federated Learning Master and Generate FLJob

We define a very simple multilayer perceptron for demonstration. When multiple organizations agree to share data knowledge through PaddleFL, a model can be defined with agreement from these organizations. An FLJob can then be generated and saved; the programs that need to run on each node are generated separately inside the FLJob.
```python
import paddle.fluid as fluid
import paddle_fl as fl
from paddle_fl.core.master.job_generator import JobGenerator
from paddle_fl.core.strategy.fl_strategy_base import FLStrategyFactory


class Model(object):
    def __init__(self):
        pass

    def mlp(self, inputs, label, hidden_size=128):
        # A small MLP: concatenate the input slots, apply two hidden
        # layers, and emit a binary softmax prediction.
        self.concat = fluid.layers.concat(inputs, axis=1)
        self.fc1 = fluid.layers.fc(input=self.concat, size=256, act='relu')
        self.fc2 = fluid.layers.fc(input=self.fc1, size=128, act='relu')
        self.predict = fluid.layers.fc(input=self.fc2, size=2, act='softmax')
        self.sum_cost = fluid.layers.cross_entropy(input=self.predict, label=label)
        self.accuracy = fluid.layers.accuracy(input=self.predict, label=label)
        self.loss = fluid.layers.reduce_mean(self.sum_cost)
        self.startup_program = fluid.default_startup_program()


# Three dense float32 input slots of width 5, plus an int64 label.
inputs = [fluid.layers.data(name=str(slot_id), shape=[5], dtype="float32")
          for slot_id in range(3)]
label = fluid.layers.data(name="label", shape=[1], dtype='int64')

model = Model()
model.mlp(inputs, label)

# The JobGenerator packs the model, optimizer, and feed/fetch targets
# into per-node programs.
job_generator = JobGenerator()
optimizer = fluid.optimizer.SGD(learning_rate=0.1)
job_generator.set_optimizer(optimizer)
job_generator.set_losses([model.loss])
job_generator.set_startup_program(model.startup_program)
job_generator.set_infer_feed_and_target_names(
    [x.name for x in inputs], [model.predict.name])

# Use Federated Averaging with one inner (local) step per round.
build_strategy = FLStrategyFactory()
build_strategy.fed_avg = True
build_strategy.inner_step = 1
strategy = build_strategy.create_fl_strategy()

# One parameter server endpoint and two trainers; the generated
# configs are written to the fl_job_config directory.
endpoints = ["127.0.0.1:8181"]
output = "fl_job_config"
job_generator.generate_fl_job(
    strategy, server_endpoints=endpoints, worker_num=2, output=output)
```
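The generated `fl_job_config` directory contains the per-node programs; the scripts in Step 3 load them by role and index through `load_trainer_job` and `load_server_job`.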
## Step 2: Dispatch FL Worker Job and FL Server Job to Distributed Nodes

We can define a secure service to send the generated programs to each node in the FLJob. There are two types of nodes in a distributed federated learning job: the FL Server and the FL Trainer. An FL Trainer is owned by an individual organization, and an organization can have multiple FL Trainers depending on how much data knowledge it is willing to share. An FL Server is owned by a secure distributed training cluster; all organizations participating in the federated training job should agree to trust that this cluster is secure. A minimal sketch of such a dispatch step is shown below.
## Step 3: Start Trainer FLJob and Server FLJob
On each FL Trainer node, a training script is defined as follows:
``` python
from paddle_fl.core.trainer.fl_trainer import FLTrainerFactory
from paddle_fl.core.master.fl_job import FLRunTimeJob
import numpy as np
import sys


def reader():
    # Random data matching the model definition: three float32 slots of
    # shape (1, 5) and a binary int64 label of shape (1, 1).
    for i in range(1000):
        data_dict = {}
        for slot_id in range(3):
            data_dict[str(slot_id)] = np.random.rand(1, 5).astype('float32')
        data_dict["label"] = np.random.randint(2, size=(1, 1)).astype('int64')
        yield data_dict


trainer_id = int(sys.argv[1])  # trainer id for each guest
job_path = "fl_job_config"
job = FLRunTimeJob()
job.load_trainer_job(job_path, trainer_id)
trainer = FLTrainerFactory().create_fl_trainer(job)
trainer.start()

output_folder = "fl_model"
step_i = 0
while not trainer.stop():
    step_i += 1
    print("batch %d start train" % step_i)
    trainer.train_inner_loop(reader)
    trainer.save_inference_program(output_folder)
```
On the FL Server node, the script is defined as follows:
```python
import paddle_fl as fl
import paddle.fluid as fluid
from paddle_fl.core.server.fl_server import FLServer
from paddle_fl.core.master.fl_job import FLRunTimeJob

server = FLServer()
server_id = 0
job_path = "fl_job_config"
job = FLRunTimeJob()
job.load_server_job(job_path, server_id)  # load the config generated in Step 1
server.set_server_job(job)
server.start()  # serve parameter aggregation for the trainers
```
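After training, each trainer has written an inference program to `fl_model`. Assuming `save_inference_program` stores a standard fluid inference model, a sketch of loading it for prediction might look like this (the random feed mirrors the reader above):

```python
import numpy as np
import paddle.fluid as fluid

# Sketch: load the saved inference program and run one forward pass.
# Assumes "fl_model" was produced by trainer.save_inference_program above.
exe = fluid.Executor(fluid.CPUPlace())
program, feed_names, fetch_targets = fluid.io.load_inference_model("fl_model", exe)
feed = {name: np.random.rand(1, 5).astype('float32') for name in feed_names}
print(exe.run(program, feed=feed, fetch_list=fetch_targets))
```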
paddle_fl
=========

.. toctree::
   :maxdepth: 1

   paddle_fl

paddle_fl.common module: Paddle FL Common Functions
===================================================

.. automodule:: paddle_fl.common
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.core.master module: Paddle FL Master
==============================================

.. automodule:: paddle_fl.core.master
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.core.server module: Paddle FL Server
==============================================

.. automodule:: paddle_fl.core.server
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.core.trainer module: Paddle FL Trainer
================================================

.. automodule:: paddle_fl.core.trainer
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.dataset module: Paddle FL Dataset
===========================================

.. automodule:: paddle_fl.dataset
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.reader module: Paddle FL Reader
=========================================

.. automodule:: paddle_fl.reader
   :members:
   :undoc-members:
   :show-inheritance: