Unverified commit 8bd6f3df authored by Dong Daxiang, committed by GitHub

Merge pull request #32 from qjing666/document

Update PaddleFL document and quick start instructions
......@@ -32,7 +32,7 @@ In PaddleFL, horizontal and vertical federated learning strategies will be imple
## Framework design of PaddleFL
<img src='images/FL-training.png' width = "1000" height = "320" align="middle"/>
<img src='images/FL-training.png' width = "1000" height = "400" align="middle"/>
In PaddleFL, components for defining a federated learning task and training a federated learning job are as follows:
......@@ -52,6 +52,8 @@ In PaddleFL, components for defining a federated learning task and training a fe
- **FL-Worker**: Each organization participating in federated learning will have one or more federated workers that communicate with the federated parameter server.
- **FL-scheduler**: Decides which set of trainers can join the training before each update cycle (see the sketch below).
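The scheduler's role can be pictured with a small sketch. This is a minimal, hypothetical example built only from the FLScheduler interface shown in Step 3 of the run-time guide later in this document; the worker and sample counts are illustrative, and any endpoint or port configuration a real deployment needs is omitted.

```python
from paddle_fl.core.scheduler.agent_master import FLScheduler

# Illustrative sizes: four registered workers, one parameter server.
worker_num = 4
server_num = 1

scheduler = FLScheduler(worker_num, server_num)
# Before every update cycle the scheduler samples a subset of workers;
# here only two of the four registered workers join each cycle.
scheduler.set_sample_worker_num(2)
scheduler.init_env()
scheduler.start_fl_training()
```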
## Install Guide and Quick-Start
Please refer to [Quick Start](https://paddlefl.readthedocs.io/en/latest/instruction.html) for installation and a quick-start example.
......
......@@ -29,7 +29,7 @@ PaddleFL is an open source federated learning framework based on PaddlePaddle. Researchers can
## Framework design of PaddleFL
<img src='images/FL-training.png' width = "1300" height = "310" align="middle"/>
<img src='images/FL-training.png' width = "1300" height = "450" align="middle"/>
In PaddleFL, components for defining a federated learning task and training a federated learning job are as follows:
......@@ -49,6 +49,8 @@ PaddleFL is an open source federated learning framework based on PaddlePaddle. Researchers can
- **FL-Worker**: Each organization participating in federated learning will have one or more workers that communicate with the federated parameter server.
- **FL-Scheduler**: Schedules workers during training; before each update cycle it decides which workers can participate in training.
## Install Guide and Quick-Start
Please refer to [Quick Start](https://paddlefl.readthedocs.io/en/latest/instruction.html) for installation and a quick-start example.
......
......@@ -14,16 +14,16 @@
## Start the service on Server side
```
#python
```sh
python server/receiver.py
```
## Start the request on User side
```
#python
```sh
python submitter.py
```
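As implied by the section order, the receiver on the server side is expected to be running before submitter.py is invoked; both commands assume PaddleFL is already installed on the respective machines.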
......
docs/source/_static/FL-framework.png: image replaced (121.8 KB → 84.5 KB)
docs/source/_static/FL-training.png: image replaced (202.8 KB → 125.4 KB)
......@@ -101,6 +101,7 @@ job_generator.generate_fl_job(
How to work in RunTime
```
python -u fl_scheduler.py >scheduler.log &
python -u fl_server.py >server0.log &
python -u fl_trainer.py 0 data/ >trainer0.log &
python -u fl_trainer.py 1 data/ >trainer1.log &
......
......@@ -70,6 +70,7 @@ job_generator.generate_fl_job(
### How to work in RunTime
```sh
python -u fl_scheduler.py >scheduler.log &
python -u fl_server.py >server0.log &
python -u fl_trainer.py 0 data/ >trainer0.log &
python -u fl_trainer.py 1 data/ >trainer1.log &
......
......@@ -26,7 +26,7 @@ In PaddleFL, horizontal and vertical federated learning strategies will be imple
## Framework design of PaddleFL
<img src='_static/FL-training.png' width = "1300" height = "310" align="middle"/>
<img src='_static/FL-training.png' width = "1300" height = "400" align="middle"/>
In PaddleFL, components for defining a federated learning task and training a federated learning job are as follows:
......@@ -46,6 +46,8 @@ In PaddleFL, components for defining a federated learning task and training a fe
- **FL-Worker**: Each organization participating in federated learning will have one or more federated workers that communicate with the federated parameter server.
- **FL-scheduler**: Decides which set of trainers can join the training before each update cycle.
## On Going and Future Work
- Experimental benchmark with public datasets in federated learning settings.
......
......@@ -60,6 +60,20 @@ We can define a secure service to send programs to each node in FLJob. There are
## Step 3: Start Federated Learning Run-Time
On the FL Scheduler node, the number of servers and workers is defined, along with the number of workers that participate in each update cycle. The FL Scheduler then waits for servers and workers to initialize.
```python
from paddle_fl.core.scheduler.agent_master import FLScheduler

# Number of federated workers and parameter servers in this job
worker_num = 2
server_num = 1
scheduler = FLScheduler(worker_num, server_num)
# Sample all registered workers in every update cycle
scheduler.set_sample_worker_num(worker_num)
# Wait for servers and workers to connect and initialize
scheduler.init_env()
print("init env done.")
# Begin coordinating federated training rounds
scheduler.start_fl_training()
```
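Here worker_num is expected to match the number of fl_trainer.py processes started at run time (two in the run-time commands shown earlier), and set_sample_worker_num determines how many of those workers are sampled for each update cycle.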
On the FL Trainer node, a training script is defined as follows:
```python
......