The details will be proposed in another PR.
1. Design and implement `lookup_table_op` and `lookup_table_grad_op` with the distributed lookup table client (see the interface sketch after this list).
1. Implement the Python wrapper for the above ops, so that users can choose and configure them.
1. The distributed Fluid should support this `distributed lookup table service` on Kubernetes.
1. Implement a `distributed transpiler` that can change the program into a distributed one which will use the `distributed lookup table service`.
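
To make the first item more concrete, below is a minimal sketch of the client interface that `lookup_table_op` and `lookup_table_grad_op` could program against. The class name `LookupTableClient` and the exact signatures are assumptions for illustration only; the `Push(grad, update_method)` shape is the one named in this design.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical client for the distributed lookup table service.
// lookup_table_op would call Pull() in the forward pass, and
// lookup_table_grad_op would call Push() in the backward pass,
// instead of reading the lookup_table from local memory.
class LookupTableClient {
 public:
  virtual ~LookupTableClient() = default;

  // Fetch the parameter rows for `ids` from the service, one row per
  // id. Ids not yet present would be initialized on the server first.
  virtual void Pull(const std::vector<int64_t>& ids,
                    std::vector<std::vector<float>>* rows) = 0;

  // Send one gradient row per id to the service, which applies them
  // using the named update method, e.g. "sgd".
  virtual void Push(const std::vector<int64_t>& ids,
                    const std::vector<std::vector<float>>& grads,
                    const std::string& update_method) = 0;
};
```

Under this split, the `distributed transpiler` from the last item would presumably rewrite the table-related operators to construct such a client, and would not need to insert optimizer operators for the table, since the update happens inside the service.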
## Things that need to be discussed
In the above design, the parameter update is done within the `distributed lookup table service`, and the interface is `Push(grad, update_method)`. This is different from the current design of PaddlePaddle Fluid, where parameter updates are done by operators. How should we implement each `update_method`?
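
To make this question concrete, here is a small, self-contained sketch (all names hypothetical, not existing Fluid code) of the simplest reading of `Push(grad, update_method)`: the service dispatches on the method name and applies the update rule itself. The update logic that Fluid normally expresses as an optimizer operator in the program here becomes hard-coded server logic, and stateful methods such as momentum or adagrad would also need per-row optimizer state kept on the server.

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical shard of the distributed lookup table service.
class TableShard {
 public:
  TableShard(size_t dim, float lr) : dim_(dim), lr_(lr) {}

  // Push(grad, update_method): apply one gradient row to the stored
  // parameter row using the requested method. Assumes grad.size() == dim.
  void Push(int64_t id, const std::vector<float>& grad,
            const std::string& update_method) {
    std::vector<float>& row = RowFor(id);
    if (update_method == "sgd") {
      for (size_t i = 0; i < dim_; ++i) row[i] -= lr_ * grad[i];
    } else {
      // Momentum, adagrad, etc. would each need their own branch and
      // their own per-row state (velocity, squared-gradient sums, ...).
      throw std::runtime_error("unsupported update_method: " + update_method);
    }
  }

  const std::vector<float>& Pull(int64_t id) { return RowFor(id); }

 private:
  // Look up a row, creating it on first access. Zero-initialization is
  // a placeholder; a real service would apply the layer's initializer.
  std::vector<float>& RowFor(int64_t id) {
    auto it = table_.find(id);
    if (it == table_.end()) {
      it = table_.emplace(id, std::vector<float>(dim_, 0.0f)).first;
    }
    return it->second;
  }

  size_t dim_;
  float lr_;
  std::unordered_map<int64_t, std::vector<float>> table_;
};

int main() {
  TableShard shard(/*dim=*/4, /*lr=*/0.1f);
  shard.Push(42, {1.0f, 1.0f, 1.0f, 1.0f}, "sgd");
  for (float v : shard.Pull(42)) std::cout << v << ' ';  // -0.1 -0.1 -0.1 -0.1
  std::cout << '\n';
  return 0;
}
```

The alternative would be to keep the service generic and have it run a small optimizer sub-program built from ordinary Fluid optimizer operators per table shard, instead of the hard-coded switch above; choosing between these two approaches is exactly what needs to be discussed.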