## Large-Scale: Support distributed graph storage and distributed training algorithms
In most cases of large-scale graph learning, we need distributed graph storage and distributed training support. As shown in the following figure, PGL provides a general solution for large-scale training: we adopt [PaddleFleet](https://github.com/PaddlePaddle/Fleet) as our distributed parameter server, which supports large-scale distributed embeddings, together with a lightweight distributed storage engine, so users can easily set up large-scale distributed training algorithms on MPI clusters.
<img src="./docs/source/_static/distributed_frame.png" alt="The distributed frame of PGL" width="800">
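The sketch below illustrates roughly how a parameter-server job of this kind is wired up. It is a minimal, illustrative example built on the generic `paddle.distributed.fleet` API rather than PGL's own distributed graph-storage helpers, and the embedding "model" is only a placeholder standing in for large-scale node embeddings.

```python
# Minimal sketch of a parameter-server training job with paddle.distributed.fleet.
# Illustrative only: uses generic PaddlePaddle fleet APIs, not PGL's distributed
# graph-storage helpers; the model below is a placeholder.
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()
fleet.init()  # role (worker / server) is read from the launch environment

# Placeholder "model": a sparse embedding lookup that fleet can shard across
# parameter servers, standing in for large-scale node embeddings.
node_id = paddle.static.data(name="node_id", shape=[None, 1], dtype="int64")
emb = paddle.static.nn.embedding(node_id, size=[1000000, 64], is_sparse=True)
loss = paddle.mean(paddle.static.nn.fc(emb, size=1))

strategy = fleet.DistributedStrategy()
strategy.a_sync = True  # asynchronous parameter-server training
optimizer = fleet.distributed_optimizer(paddle.optimizer.Adam(1e-3), strategy)
optimizer.minimize(loss)

if fleet.is_server():
    fleet.init_server()
    fleet.run_server()
else:
    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())
    fleet.init_worker()
    # ... feed mini-batches sampled from the distributed graph store here ...
    fleet.stop_worker()
```

Such a script is typically launched once per server and worker process (for example via `paddle.distributed.launch`; the exact launcher flags depend on the Paddle version), with the graph itself served by the distributed storage engine and queried by each worker during sampling.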