diff --git a/doc/fluid/design/dist_train/distributed_lookup_table_design.md b/doc/fluid/design/dist_train/distributed_lookup_table_design.md
index b8fa6b46d718680795776c0d3b3820d5f3446ecd..97f890c88e778a59ea475e984ccbc28cf026fc5b 100644
--- a/doc/fluid/design/dist_train/distributed_lookup_table_design.md
+++ b/doc/fluid/design/dist_train/distributed_lookup_table_design.md
@@ -124,9 +124,8 @@ optimization algorithm $f$ runs on the storage service.
 For another design, we can implement a distributed sparse table in Fluid,
 and don't need to maintain an external storage component while training.
 
-Prior to reading this design, it would be useful for the reader to make themselves
-familiar with Fluid [Distributed Training Architecture](./distributed_architecture.md)
-and [Parameter Server](./parameter_server.md).
+You may need to read Fluid [Distributed Training Architecture](./distributed_architecture.md)
+and [Parameter Server](./parameter_server.md) before going on.
 
 ![fluid lookup remote table](./src/fluid_lookup_remote_table.png)
 