diff --git a/doc/CUBE_LOCAL.md b/doc/CUBE_LOCAL.md
index f3e521b83ddcd19ebac02099478b8f1a13591ab5..ef474291eb15379238d4b39e0fad05df0546ef8c 100644
--- a/doc/CUBE_LOCAL.md
+++ b/doc/CUBE_LOCAL.md
@@ -1,11 +1,12 @@
 # Cube: Sparse Parameter Server (Local Mode)
 
-## Overview
+([简体中文](./CUBE_LOCAL_CN.md)|English)
+## Overview
 
 There are two CTR examples under python/examples: criteo_ctr and criteo_ctr_with_cube. The former saves the entire model during training, including the sparse parameters. The latter cuts out the sparse parameters and saves the model in two parts: the sparse parameters and the dense parameters. Because the scale of sparse parameters is very large in industrial cases, reaching the order of 10^9, it is not practical to serve large-scale sparse parameter prediction on a single machine. Therefore, we introduce Cube, Baidu's industrial-grade sparse parameter server that has been in production for many years, to provide a distributed sparse parameter service.
 
-The local mode of Cube differs from distributed Cube: it is designed to be convenient for developers to use in experiments and demos. If you need a distributed sparse parameter service, please continue with the [Distributed Cube User Guide](Distributed Cube) after reading this document (still under development).
+The local mode of Cube differs from distributed Cube: it is designed to be convenient for developers to use in experiments and demos. If you need a distributed sparse parameter service, please continue with the [Distributed Cube User Guide](./Distributed_Cube) after reading this document (still under development).
 
 ## Example