diff --git a/doc/fluid/design/dist_train/distributed_traing_review.md b/doc/fluid/design/dist_train/distributed_traing_review.md
index 032452c615f379cd87ef1be0ff31fc944f0ec17a..a4604705a87f15b45a9024252bd27c0397d39557 100644
--- a/doc/fluid/design/dist_train/distributed_traing_review.md
+++ b/doc/fluid/design/dist_train/distributed_traing_review.md
@@ -1,8 +1,6 @@
 # Parallelism, Asynchronous, Synchronous, Codistillation
 
-[TOC]
-
 For valuable models, it’s worth using more hardware resources to reduce the training time and improve the final model quality. This doc discusses various solutions, their empirical results, and some recent research.
 
 # Model Parallelism