From 24b2d14e4c5a41d1aa262f935a86593d4d8bed5e Mon Sep 17 00:00:00 2001
From: Xin Pan
Date: Thu, 3 May 2018 15:23:15 +0800
Subject: [PATCH] fix

---
 doc/fluid/design/dist_train/distributed_traing_review.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/doc/fluid/design/dist_train/distributed_traing_review.md b/doc/fluid/design/dist_train/distributed_traing_review.md
index 032452c615..a4604705a8 100644
--- a/doc/fluid/design/dist_train/distributed_traing_review.md
+++ b/doc/fluid/design/dist_train/distributed_traing_review.md
@@ -1,8 +1,6 @@
 # Parallelism, Asynchronous, Synchronous, Codistillation
 
-[TOC]
-
 For valuable models, it’s worth using more hardware resources to reduce the training time and improve the final model quality. This doc discuss various solutions, their empirical results and some latest researches.
 
 # Model Parallelism
 
--
GitLab