From af2d926c25193e1f98044dbae2700ab1fb7febe1 Mon Sep 17 00:00:00 2001
From: xjqbest <173596896@qq.com>
Date: Tue, 10 Dec 2019 22:51:37 +0800
Subject: [PATCH] fix typo

test=develop
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 201173e..5a5cd38 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,7 @@ Because of the different node types on the heterogeneous graph, the message deli
 ## Large-Scale: Support distributed graph storage and distributed training algorithms
 
-In most cases of large-scale graph learning, we need distributed graph storage and distributed training support. As shown in the following figure, PGL provided a general solution of large-scale training, we adopted [PaddleFleet](https://github.com/PaddlePaddle/Fleet) as our distributed parameter servers, which supports large scale distributed embeddings and a lightweighted distributed storage engine so tcan easily set up a large scale distributed training algorithm with MPI clusters.
+In most cases of large-scale graph learning, we need distributed graph storage and distributed training support. As shown in the following figure, PGL provides a general solution for large-scale training: we adopt [PaddleFleet](https://github.com/PaddlePaddle/Fleet) as our distributed parameter server, which supports large-scale distributed embeddings and a lightweight distributed storage engine, so a large-scale distributed training algorithm can easily be set up on MPI clusters.
 
 The distributed frame of PGL
--
GitLab