Commit af2d926c authored by xjqbest

fix typo

test=develop
Parent da9d2e0a
@@ -76,7 +76,7 @@ Because of the different node types on the heterogeneous graph, the message deli
 ## Large-Scale: Support distributed graph storage and distributed training algorithms
-In most cases of large-scale graph learning, we need distributed graph storage and distributed training support. As shown in the following figure, PGL provided a general solution of large-scale training, we adopted [PaddleFleet](https://github.com/PaddlePaddle/Fleet) as our distributed parameter servers, which supports large scale distributed embeddings and a lightweighted distributed storage engine so tcan easily set up a large scale distributed training algorithm with MPI clusters.
+In most cases of large-scale graph learning, we need distributed graph storage and distributed training support. As shown in the following figure, PGL provided a general solution of large-scale training, we adopted [PaddleFleet](https://github.com/PaddlePaddle/Fleet) as our distributed parameter servers, which supports large scale distributed embeddings and a lightweighted distributed storage engine so it can easily set up a large scale distributed training algorithm with MPI clusters.
 <img src="./docs/source/_static/distributed_frame.png" alt="The distributed frame of PGL" width="800">
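The paragraph touched by this fix describes PGL's large-scale architecture: a distributed parameter server holds the embedding table, and workers pull only the rows their mini-batch needs, then push sparse gradient updates back. A minimal conceptual sketch of that pull/push pattern follows — plain Python, not PGL's or PaddleFleet's actual API; the class and function names here are hypothetical:

```python
# Conceptual sketch (hypothetical names, not the PaddleFleet API) of the
# parameter-server pattern for distributed embeddings: workers pull only
# the embedding rows in the current mini-batch, compute gradients locally,
# and push sparse updates back to the server.
import random


class ParameterServer:
    """Holds the embedding table (sharded across machines in a real system)."""

    def __init__(self, num_nodes, dim, lr=0.1):
        random.seed(0)
        self.table = {i: [random.uniform(-0.1, 0.1) for _ in range(dim)]
                      for i in range(num_nodes)}
        self.lr = lr

    def pull(self, node_ids):
        # A worker fetches only the rows for nodes in its mini-batch.
        return {i: list(self.table[i]) for i in node_ids}

    def push(self, grads):
        # Apply sparse SGD updates sent back by a worker.
        for i, grad in grads.items():
            row = self.table[i]
            for k in range(len(row)):
                row[k] -= self.lr * grad[k]


def worker_step(ps, batch_node_ids):
    # One worker iteration: pull, compute a toy gradient, push.
    emb = ps.pull(batch_node_ids)
    # Toy loss sum(v^2) per row, so the gradient of each entry is 2*v.
    grads = {i: [2.0 * v for v in row] for i, row in emb.items()}
    ps.push(grads)


ps = ParameterServer(num_nodes=5, dim=4)
before = sum(v * v for v in ps.table[0])
for _ in range(10):
    worker_step(ps, batch_node_ids=[0, 2])
after = sum(v * v for v in ps.table[0])
print(after < before)  # pulled rows shrink toward zero; untouched rows don't move
```

In the real system, `pull`/`push` would be RPCs to sharded servers launched across an MPI cluster; the sketch only illustrates why sparse row-level access makes very large embedding tables trainable.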