diff --git a/doc/fluid/design/dist_train/distributed_lookup_table_design.md b/doc/fluid/design/dist_train/distributed_lookup_table_design.md
index 20ed6f31dfad3026793a3b347b91e226984d394c..3d2e9ef19e630142b6225fe8c8c028aad30e92cd 100644
--- a/doc/fluid/design/dist_train/distributed_lookup_table_design.md
+++ b/doc/fluid/design/dist_train/distributed_lookup_table_design.md
@@ -67,7 +67,7 @@ operator:
 ![lookup table training](./src/lookup_table_training.png)
 
 At the beginning of training, paddle only malloc the memory for the lookup table at parameter server side, the id and it's value will not be initialized. During training, when a parameter server received an Id, if it is already in the lookup table, it will return the existing parameter, if the id does not exist, paddle will add it into the lookup table and initialize the value for it.
 
-### Problem3: parameter load and save
+### Problem 3: parameter load and save
 
 For common parameters, paddle use trainer to save and load them. But for distribute lookup table, trainer can not do this because it's large size.
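
The context paragraph in the hunk describes lazy initialization: the parameter server allocates and initializes a row for an id only the first time it is seen, and returns the existing row afterwards. A minimal sketch of that behavior (class name `SparseLookupTable`, the embedding width, and the init range are all illustrative assumptions, not Paddle's actual implementation):

```python
import random

EMB_DIM = 4  # assumed embedding width, for illustration only


class SparseLookupTable:
    """Hypothetical sketch of the lazily initialized lookup table
    described in the design doc: rows are created on first access."""

    def __init__(self, dim):
        self.dim = dim
        self.table = {}  # id -> parameter row, filled on demand

    def lookup(self, ids):
        rows = []
        for i in ids:
            if i not in self.table:
                # First time this id is seen: allocate and initialize it.
                self.table[i] = [random.uniform(-0.1, 0.1)
                                 for _ in range(self.dim)]
            # Subsequent lookups return the existing parameter row.
            rows.append(self.table[i])
        return rows


table = SparseLookupTable(EMB_DIM)
first = table.lookup([7, 42])
again = table.lookup([7])
assert again[0] is first[0]  # same row object: not re-initialized
```

Because rows are created only for ids that actually occur in the training data, the server never pre-allocates the full (potentially huge) id space, which is the point of the malloc-on-demand scheme the paragraph describes.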