Commit 4d421423 authored by dongdaxiang

add docs

Parent de764c05
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
......
sphinx==2.1.0
mistune
crate-docs-theme
paddlepaddle
sphinx_rtd_theme
paddle_fl
===

.. toctree::
   :maxdepth: 1

   paddle_fl
paddle_fl.core.master: PaddleFL Compile-Time
============================================

.. automodule:: paddle_fl.core.master
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.core.server: Server Run-Time
======================================

.. automodule:: paddle_fl.core.server
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.core.strategy: Federated Learning Strategies
======================================================

.. automodule:: paddle_fl.core.strategy
   :members:
   :undoc-members:
   :show-inheritance:

paddle_fl.core.trainer: Trainer Run-Time
========================================

.. automodule:: paddle_fl.core.trainer
   :members:
   :undoc-members:
   :show-inheritance:

API Reference
=============

.. toctree::

   paddle_fl.core.master
   paddle_fl.core.strategy
   paddle_fl.core.trainer
   paddle_fl.core.server
# -*- coding: utf-8 -*-
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# http://www.sphinx-doc.org/en/master/config
# -- Path setup --------------------------------------------------------------
......
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
"""
conf.py
"""
import os
import sys
sys.path.append(os.path.abspath('../../paddle_fl/'))
sys.path.append(os.path.abspath('../..'))
sys.path.append(os.path.abspath('..'))
import sphinx_rtd_theme
# -- Project information -----------------------------------------------------
project = 'PaddleFL'
copyright = '2019, PaddlePaddle'
author = 'PaddlePaddle'
# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = '0.1.0.beta'
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.todo', 'sphinx.ext.viewcode', 'sphinx.ext.mathjax',
    'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'markdown2rst'
]
# Support Inline mathjax
m2r_disable_inline_math = False
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = None
source_suffix = ['.rst', '.md']
#exclude_patterns = ['pgl.graph_kernel', 'pgl.layers.conv']
language = 'zh_CN'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_show_sourcelink = False
#html_logo = 'pgl_logo.png'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'PaddleFLdoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}

'''
html_theme_options = {
'canonical_url': '',
'analytics_id': 'UA-XXXXXXX-1', # Provided by Google in your dashboard
'logo_only': False,
'display_version': True,
'prev_next_buttons_location': 'bottom',
'style_external_links': False,
'vcs_pageview_mode': '',
'style_nav_header_background': 'white',
# Toc options
'collapse_navigation': True,
'sticky_navigation': True,
'navigation_depth': 4,
'includehidden': True,
'titles_only': False
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'PaddleFL.tex', u'PaddleFL Documentation',
u'guru4elephant', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'paddlefl', u'PaddleFL Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'PaddleFL', u'PaddleFL Documentation',
author, 'PaddleFL', 'One line description of project.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for intersphinx extension ---------------------------------------
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
'''
The Team
========
PaddleFL is developed and maintained by Nimitz Team at Baidu
.. mdinclude:: md/gru4rec_examples.md
# GCN for citation network classification

[Graph Convolutional Network \(GCN\)](https://arxiv.org/abs/1609.02907) is a powerful neural network designed for machine learning on graphs. Based on PGL, we reproduce the GCN algorithm and reach the same level of indicators as the paper on citation network benchmarks.
### Simple example to build GCN
To build a GCN layer, one can either use the pre-defined ```pgl.layers.gcn``` or write a GCN layer with the message passing interface, as in the snippet and the stacking sketch below.
```python
import paddle.fluid as fluid
def gcn_layer(graph_wrapper, node_feature, hidden_size, act):
    def send_func(src_feat, dst_feat, edge_feat):
        # Each edge sends the feature of its source node as the message.
        return src_feat["h"]

    def recv_func(msg):
        # Each node sums the messages received from its neighbors.
        return fluid.layers.sequence_pool(msg, "sum")

    message = graph_wrapper.send(send_func, nfeat_list=[("h", node_feature)])
    output = graph_wrapper.recv(recv_func, message)
    output = fluid.layers.fc(output, size=hidden_size, act=act)
    return output
```
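The layer above composes naturally. As a minimal sketch that assumes only the `gcn_layer` function just defined, a two-layer GCN for node classification could be stacked like this (layer sizes are illustrative):

```python
# A minimal sketch: stacking two gcn_layer calls into a two-layer GCN.
# Only the gcn_layer function defined above is assumed.
def two_layer_gcn(graph_wrapper, node_feature, num_classes):
    hidden = gcn_layer(graph_wrapper, node_feature, hidden_size=16, act="relu")
    # The final layer produces per-node logits; a softmax loss would follow.
    return gcn_layer(graph_wrapper, hidden, hidden_size=num_classes, act=None)
```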
### Datasets
The datasets contain three citation networks: CORA, PUBMED, CITESEER. The details for these three datasets can be found in the [paper](https://arxiv.org/abs/1609.02907).
### Dependencies
- paddlepaddle>=1.4 (training is faster with 1.5)
- pgl
### Performance
We train our models for 200 epochs and report the accuracy on the test dataset.
| Dataset | Accuracy | Speed with paddle 1.4 <br> (epoch time) | Speed with paddle 1.5 <br> (epoch time)|
| --- | --- | --- |---|
| Cora | ~81% | 0.0106s | 0.0104s |
| Pubmed | ~79% | 0.0210s | 0.0154s |
| Citeseer | ~71% | 0.0175s | 0.0177s |
### How to run
For example, to train GCN on the Cora dataset with GPU:
```
python train.py --dataset cora --use_cuda
```
#### Hyperparameters
- dataset: the citation dataset to use, one of "cora", "citeseer", or "pubmed".
- use_cuda: train on GPU if this flag is set.
### View the Code
See the code [here](gcn_examples_code.html)
:github_url: https://github.com/PaddlePaddle/PaddleFL
.. PaddleFL documentation master file, created by
   sphinx-quickstart on Sat Sep 28 10:48:34 2019.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to Paddle Federated Learning
====================================
.. toctree::
   :maxdepth: 2
   :caption: Contents:
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Quick Start
===========
.. toctree::

......

See instruction_ for quick start.

.. _instruction: instruction.html

.. toctree::
   :maxdepth: 1
   :caption: Introduction
   :hidden:

   introduction.rst
.. mdinclude:: md/introduction.md
.. toctree::
   :maxdepth: 1
   :caption: Examples

......

   api/paddle_fl
The Team
========
.. toctree::

......
   :hidden:

   team.rst

PaddleFL is developed and maintained by Nimitz Team at Baidu
.. toctree::
   :maxdepth: 1
   :caption: Reference
   :hidden:

   reference.rst
.. mdinclude:: md/reference.md
License
=======
PaddleFL is released under the Apache License 2.0.
......
Quick Start Instructions
========================

Install PaddleFL
----------------

To install PaddleFL, we need the following packages.

.. code-block:: sh

   paddlepaddle >= 1.6
   networkx

We can then install PaddleFL by running

.. code-block:: sh

   python setup.py install
.. mdinclude:: md/quick_start.md
.. mdinclude:: md/introduction.md
......
## Overview of PaddleFL
<img src='_static/FL-framework.png' width = "1300" height = "310" align="middle"/>
In PaddleFL, horizontal and vertical federated learning strategies will be implemented according to the categorization given in [4]. Application demonstrations in natural language processing, computer vision and recommendation will be provided in PaddleFL.
#### Federated Learning Strategy
......
## Framework design of PaddleFL
<img src='_static/FL-training.png' width = "1300" height = "310" align="middle"/>
In PaddleFL, components for defining a federated learning task and training a federated learning job are as follows:
......
- Vertical Federated Learning Strategies and more horizontal federated learning strategies will be open sourced.
## Reference
[1]. Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016

[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017

[3]. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016

[4]. Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. **Federated Machine Learning: Concept and Applications.** 2019

[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC - A Framework for Efficient Secure Two-Party Computation.** In Proceedings of the 15th EAI International Conference on Security and Privacy in Communication Networks (SecureComm 2019)

[6]. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016

[7]. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar. **Federated Multi-Task Learning.** 2016

[8]. Yang Liu, Tianjian Chen, Qiang Yang. **Secure Federated Transfer Learning.** 2018
# PaddleFL
PaddleFL is an open source federated learning framework based on PaddlePaddle. Researchers can easily replicate and compare different federated learning algorithms with PaddleFL. Developers can also benefit from PaddleFL since it is easy to deploy a federated learning system in large-scale distributed clusters. In PaddleFL, several federated learning strategies will be provided, with applications in computer vision, natural language processing, recommendation, and so on. Applications of traditional machine learning training strategies, such as multi-task learning and transfer learning in federated learning settings, will also be provided. Based on PaddlePaddle's large-scale distributed training and elastic scheduling of training jobs on Kubernetes, PaddleFL can be easily deployed with full-stack open source software.
# Federated Learning
Data is becoming more and more expensive nowadays, and sharing raw data across organizations is very hard. Federated learning aims to solve the problem of data isolation and of securely sharing data knowledge among organizations. The concept of federated learning was proposed by researchers at Google [1, 2, 3].
## Overview of PaddleFL
<img src='_static/FL-framework.png' width = "1300" height = "310" align="middle"/>
In PaddleFL, horizontal and vertical federated learning strategies will be implemented according to the categorization given in [4]. Application demonstrations in natural language processing, computer vision and recommendation will be provided in PaddleFL.
#### Federated Learning Strategy
- **Vertical Federated Learning**: Logistic Regression with PrivC, Neural Network with third-party PrivC [5]
- **Horizontal Federated Learning**: Federated Averaging [2], Differential Privacy [6] (see the strategy-selection sketch below)
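To make these options concrete, the sketch below shows how a horizontal strategy might be selected. It follows the PaddleFL examples; the flag names (`fed_avg`, `dpsgd`, `inner_step`) are drawn from those examples and should be treated as assumptions rather than this commit's exact API.

```python
# A sketch of selecting a horizontal FL strategy, following the PaddleFL
# examples; the flags below are assumptions drawn from those examples.
from paddle_fl.core.strategy.fl_strategy_base import FLStrategyFactory

build_strategy = FLStrategyFactory()
build_strategy.fed_avg = True   # Federated Averaging [2]
build_strategy.dpsgd = False    # set True for DP-SGD style training [6]
build_strategy.inner_step = 10  # local steps between synchronizations
strategy = build_strategy.create_fl_strategy()
```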
#### Training Strategy
- **Multi Task Learning** [7]
- **Transfer Learning** [8]
- **Active Learning**
## Framework design of PaddleFL
<img src='_static/FL-training.png' width = "1300" height = "310" align="middle"/>
In PaddleFL, components for defining a federated learning task and training a federated learning job are as follows:
#### Compile Time
- **FL-Strategy**: a user can define federated learning strategies with FL-Strategy, such as Fed-Avg [1]
- **User-Defined-Program**: PaddlePaddle's program that defines the machine learning model structure and training strategies such as multi-task learning.
- **Distributed-Config**: In federated learning, a system should be deployed in distributed settings. Distributed Training Config defines distributed training node information.
- **FL-Job-Generator**: Given FL-Strategy, User-Defined Program and Distributed Training Config, FL-Job for federated server and worker will be generated through FL Job Generator. FL-Jobs will be sent to organizations and federated parameter server for run-time execution.
#### Run Time
- **FL-Server**: federated parameter server that usually runs in cloud or third-party clusters.
- **FL-Worker**: each organization participating in federated learning will have one or more federated workers that communicate with the federated parameter server (a server-side sketch follows this list).
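As a hedged illustration of the run-time split, the sketch below shows how an FL-Server might be brought up from a generated job configuration. The class and method names (`FLServer`, `FLRunTimeJob`, `load_server_job`) follow the PaddleFL examples and are assumptions here.

```python
# A sketch of starting an FL-Server from a generated job config, following
# the PaddleFL examples; names and the "fl_job_config" layout are assumptions.
from paddle_fl.core.master.fl_job import FLRunTimeJob
from paddle_fl.core.server.fl_server import FLServer

server = FLServer()
server_id = 0
job_path = "fl_job_config"  # directory produced at compile time
job = FLRunTimeJob()
job.load_server_job(job_path, server_id)
server.set_server_job(job)
server.start()              # serve model parameters to FL-Workers
```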
## On Going and Future Work
- Experimental benchmark with public datasets in federated learning settings.
- Federated Learning Systems deployment methods in Kubernetes.
- Vertical Federated Learning Strategies and more horizontal federated learning strategies will be open sourced.
## Reference
[1]. Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016

[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017

[3]. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016

[4]. Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. **Federated Machine Learning: Concept and Applications.** 2019

[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC - A Framework for Efficient Secure Two-Party Computation.** In Proceedings of the 15th EAI International Conference on Security and Privacy in Communication Networks (SecureComm 2019)

[6]. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016

[7]. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar. **Federated Multi-Task Learning.** 2016

[8]. Yang Liu, Tianjian Chen, Qiang Yang. **Secure Federated Transfer Learning.** 2018
## Step 1: Define Federated Learning Compile-Time

We define a very simple multilayer perceptron for demonstration. When multiple organizations agree to share data knowledge through PaddleFL, a model can be defined with agreement from these organizations. A FLJob can be generated and saved; the programs that each node needs to run are generated separately in the FLJob. A fuller compile-time sketch follows the code excerpt below.
......
job_generator.generate_fl_job(
    strategy, server_endpoints=endpoints, worker_num=2, output=output)
```
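Since most of the compile-time code is elided above, the sketch below fills in its likely shape. It follows the PaddleFL examples; the `Model` class is a placeholder written for this illustration, and the endpoint, worker count, and method names are assumptions rather than the exact code of this commit.

```python
# A hedged compile-time sketch following the PaddleFL examples; names and
# values are illustrative, not this commit's exact code.
import paddle.fluid as fluid
from paddle_fl.core.master.job_generator import JobGenerator
from paddle_fl.core.strategy.fl_strategy_base import FLStrategyFactory

class Model(object):
    # A minimal placeholder for the user-defined multilayer perceptron.
    def __init__(self):
        self.x = fluid.layers.data(name="x", shape=[10], dtype="float32")
        self.y = fluid.layers.data(name="y", shape=[1], dtype="int64")
        hidden = fluid.layers.fc(input=self.x, size=32, act="relu")
        predict = fluid.layers.fc(input=hidden, size=2, act="softmax")
        self.loss = fluid.layers.mean(
            fluid.layers.cross_entropy(input=predict, label=self.y))
        self.startup_program = fluid.default_startup_program()

model = Model()

# Wire the user-defined program into the job generator.
job_generator = JobGenerator()
job_generator.set_optimizer(fluid.optimizer.SGD(learning_rate=0.1))
job_generator.set_losses([model.loss])
job_generator.set_startup_program(model.startup_program)

# Choose a horizontal FL strategy (Federated Averaging here).
build_strategy = FLStrategyFactory()
build_strategy.fed_avg = True
build_strategy.inner_step = 10
strategy = build_strategy.create_fl_strategy()

# Generate per-node FL-Jobs for one server and two trainers.
endpoints = ["127.0.0.1:8181"]
output = "fl_job_config"
job_generator.generate_fl_job(
    strategy, server_endpoints=endpoints, worker_num=2, output=output)
```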
## Step 2: Issue FL Job to Organizations
We can define a secure service to send programs to each node in the FLJob. There are two types of nodes in a distributed federated learning job: FL Servers and FL Trainers. An FL Trainer is owned by an individual organization, and an organization can have multiple FL Trainers depending on how much data knowledge it is willing to share. An FL Server is owned by a secure distributed training cluster; by relying on the security of that cluster, all organizations participating in the federated training job agree to trust it. A hypothetical dispatch sketch follows.
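One possible (and purely hypothetical) dispatch step is sketched below: each generated sub-job directory is copied to the node that will run it. The `scp` transport, host names, and the sub-directory layout under `fl_job_config` are assumptions, not a PaddleFL API.

```python
# Hypothetical dispatch sketch: push each generated sub-job to its node.
# Transport (scp), hosts, and directory names are illustrative only.
import subprocess

job_output = "fl_job_config"  # directory written by generate_fl_job
destinations = {
    "server0": "fl-server.example.com:/opt/paddle_fl/",
    "trainer0": "org-a.example.com:/opt/paddle_fl/",
    "trainer1": "org-b.example.com:/opt/paddle_fl/",
}
for sub_job, dest in destinations.items():
    # Each node receives only the program it is supposed to run.
    subprocess.check_call(
        ["scp", "-r", "%s/%s" % (job_output, sub_job), dest])
```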
## Step 3: Start Federated Learning Run-Time
On the FL Trainer node, a training script is defined as follows:
......
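The trainer script itself is elided in this view; the sketch below reconstructs its likely shape from the PaddleFL examples, so the class and method names (`FLTrainerFactory`, `FLRunTimeJob`, `load_trainer_job`) should be read as assumptions.

```python
# A hedged trainer-side sketch following the PaddleFL examples; the elided
# script in this commit may differ, and the per-step data feeding loop is
# omitted for brevity.
from paddle_fl.core.master.fl_job import FLRunTimeJob
from paddle_fl.core.trainer.fl_trainer import FLTrainerFactory

trainer_id = 0              # this organization's worker index
job_path = "fl_job_config"  # config dispatched in Step 2
job = FLRunTimeJob()
job.load_trainer_job(job_path, trainer_id)

trainer = FLTrainerFactory().create_fl_trainer(job)
trainer.start()             # connect to the FL-Server and begin training
```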
## Reference
[1]. Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016

[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017

[3]. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016

[4]. Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. **Federated Machine Learning: Concept and Applications.** 2019

[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC - A Framework for Efficient Secure Two-Party Computation.** In Proceedings of the 15th EAI International Conference on Security and Privacy in Communication Networks (SecureComm 2019)

[6]. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016

[7]. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar. **Federated Multi-Task Learning.** 2016

[8]. Yang Liu, Tianjian Chen, Qiang Yang. **Secure Federated Transfer Learning.** 2018
.. mdinclude:: md/reference.md
The Team
========
PaddleFL is developed and maintained by Nimitz Team at Baidu