diff --git a/README.md b/README.md
index 201173ecaaa8dbc21f0ffc2b9a9e9075e34537d2..5a5cd380601c82d028c4cc1a3ccd25c63ded1994 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,7 @@ Because of the different node types on the heterogeneous graph, the message deli
 
 ## Large-Scale: Support distributed graph storage and distributed training algorithms
 
-In most cases of large-scale graph learning, we need distributed graph storage and distributed training support. As shown in the following figure, PGL provided a general solution of large-scale training, we adopted [PaddleFleet](https://github.com/PaddlePaddle/Fleet) as our distributed parameter servers, which supports large scale distributed embeddings and a lightweighted distributed storage engine so tcan easily set up a large scale distributed training algorithm with MPI clusters.
+In most cases, large-scale graph learning requires distributed graph storage and distributed training support. As shown in the following figure, PGL provides a general solution for large-scale training: we adopt [PaddleFleet](https://github.com/PaddlePaddle/Fleet) as our distributed parameter server, which supports large-scale distributed embeddings and a lightweight distributed storage engine, so a large-scale distributed training algorithm can easily be set up on MPI clusters.
 
 The distributed frame of PGL