diff --git a/docs/en/advanced_tutorials/index.rst b/docs/en/advanced_tutorials/index.rst
index b0121559bf0267a3c21ff1e899dcda0486fb6a89..e741733e7added372c21e3e40896f5bd39d21727 100644
--- a/docs/en/advanced_tutorials/index.rst
+++ b/docs/en/advanced_tutorials/index.rst
@@ -2,7 +2,7 @@ advanced_tutorials
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    DataAugmentation_en.md
    distillation/index
diff --git a/docs/en/algorithm_introduction/index.rst b/docs/en/algorithm_introduction/index.rst
index 9511089103cad0f68b27f62a231b4936a404923a..e0fad797377ba99fb182578d9ae630b29339fbde 100644
--- a/docs/en/algorithm_introduction/index.rst
+++ b/docs/en/algorithm_introduction/index.rst
@@ -2,7 +2,7 @@ algorithm_introduction
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    image_classification_en.md
    metric_learning_en.md
diff --git a/docs/en/data_preparation/index.rst b/docs/en/data_preparation/index.rst
index 589157255e83405efe03003036773742244f06ba..e668126ecc8b1506d2008f0bfe7c8abf2678f357 100644
--- a/docs/en/data_preparation/index.rst
+++ b/docs/en/data_preparation/index.rst
@@ -2,7 +2,7 @@ data_preparation
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    recognition_dataset_en.md
    classification_dataset_en.md
diff --git a/docs/en/doc_en.rst b/docs/en/doc_en.rst
index ee21fcc7b8f88455cfe70b759323e563bef06903..aa25829f5c811d8419dbcb18558584c2cb5a29fe 100644
--- a/docs/en/doc_en.rst
+++ b/docs/en/doc_en.rst
@@ -15,7 +15,6 @@ Welcome to PaddleClas!
 
    algorithm_introduction/index
    advanced_tutorials/index
    others/index
-   extension/index
    faq_series/index
diff --git a/docs/en/extension/VisualDL_en.md b/docs/en/extension/VisualDL_en.md
deleted file mode 100644
index 403a74f496e98f7e290972e489014a787ee13d0d..0000000000000000000000000000000000000000
--- a/docs/en/extension/VisualDL_en.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Use VisualDL to visualize the training
-
-## Preface
-VisualDL, a visualization analysis tool of PaddlePaddle, provides a variety of charts to show the trends of parameters, and visualizes model structures, data samples, histograms of tensors, PR curves, ROC curves and high-dimensional data distributions. It enables users to understand the training process and the model structure more clearly and intuitively, so as to optimize models efficiently. For more information, please refer to [VisualDL](https://github.com/PaddlePaddle/VisualDL/).
-
-## Use VisualDL in PaddleClas
-PaddleClas now supports using VisualDL to visualize the changes of learning rate, loss and accuracy during training.
-
-### Set config and start training
-You only need to set the field `Global.use_visualdl` to `True` in the training config:
-
-```yaml
-# config.yaml
-Global:
-...
-  use_visualdl: True
-...
-```
-
-PaddleClas will save the VisualDL logs to the subdirectory `vdl/` under the output directory specified by `Global.output_dir`. Then just start training normally:
-
-```shell
-python3 tools/train.py -c config.yaml
-```
-
-### Start VisualDL
-After starting the training program, you can start the VisualDL service in a new terminal session:
-
-```shell
-visualdl --logdir ./output/vdl/
-```
-
-In the above command, `--logdir` specifies the directory of the VisualDL logs produced during training. VisualDL will recursively traverse the subdirectories of the specified directory to visualize all the experimental results.
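That traversal behavior also makes it easy to compare several runs in one panel. A minimal sketch, assuming two hypothetical training runs whose logs were written to `output/run_a/vdl/` and `output/run_b/vdl/`, is to point `--logdir` at their common parent:

```shell
# hypothetical layout: output/run_a/vdl/ and output/run_b/vdl/
# VisualDL discovers both subdirectories automatically and shows the runs side by side
visualdl --logdir ./output/
```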
-You can also use the following parameters to set the IP address and port number of the VisualDL service:
-
-* `--host`: IP address, default is 127.0.0.1
-* `--port`: port, default is 8040
-
-For more information about the command, please refer to [VisualDL](https://github.com/PaddlePaddle/VisualDL/blob/develop/README.md#2-launch-panel).
-
-Then you can open the address `127.0.0.1:8040` in a browser and view the training process:
-
-![](../../images/VisualDL/train_loss.png)
diff --git a/docs/en/extension/index.rst b/docs/en/extension/index.rst
deleted file mode 100644
index 880a85c1b783f2c94873d05bcc1abe27a542d24d..0000000000000000000000000000000000000000
--- a/docs/en/extension/index.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-extension
-================================
-
-.. toctree::
-   :maxdepth: 1
-
-   train_with_DALI_en.md
-   VisualDL_en.md
-   paddle_mobile_inference_en.md
-   paddle_serving_en.md
-   paddle_quantization_en.md
-   paddle_hub_en.md
-   multi_machine_training_en.md
diff --git a/docs/en/extension/multi_machine_training_en.md b/docs/en/extension/multi_machine_training_en.md
deleted file mode 100644
index d4fb997842717a7342f4acc457229aa07d290365..0000000000000000000000000000000000000000
--- a/docs/en/extension/multi_machine_training_en.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Distributed Training
-
-Distributed training of deep neural networks is highly efficient in PaddlePaddle,
-and it is one of PaddlePaddle's core advantages.
-On image classification tasks, distributed training can achieve an almost linear speedup.
-[Fleet](https://github.com/PaddlePaddle/Fleet) is the high-level API for distributed training in PaddlePaddle.
-By using Fleet, a user can easily migrate single-machine PaddlePaddle code to distributed code.
-In order to support both single-machine and multi-machine training,
-[PaddleClas](https://github.com/PaddlePaddle/PaddleClas) uses the Fleet API.
-For more information about distributed training,
-please refer to the [Fleet API documentation](https://github.com/PaddlePaddle/Fleet/blob/develop/README.md).
diff --git a/docs/en/extension/paddle_hub_en.md b/docs/en/extension/paddle_hub_en.md
deleted file mode 100644
index d9d833a01cf26df92a040881827d7d855d75ef9a..0000000000000000000000000000000000000000
--- a/docs/en/extension/paddle_hub_en.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# PaddleHub
-
-[PaddleHub](https://github.com/PaddlePaddle/PaddleHub) is a pre-trained model application tool for PaddlePaddle.
-Developers can conveniently combine high-quality pre-trained models with the Fine-tune API to quickly complete the whole workflow from model transfer to deployment.
-All the pre-trained models of [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) have been collected by PaddleHub.
-For further details, please refer to the [PaddleHub website](https://www.paddlepaddle.org.cn/hub).
diff --git a/docs/en/extension/paddle_mobile_inference_en.md b/docs/en/extension/paddle_mobile_inference_en.md
deleted file mode 100644
index 86c8e040d139e30795f6d71a98a0f5b1c277b851..0000000000000000000000000000000000000000
--- a/docs/en/extension/paddle_mobile_inference_en.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# Paddle-Lite
-
-## Introduction
-
-[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) is a lightweight inference engine that is fully functional, easy to use, and performs well overall.
-Its lightweight design uses fewer bits to represent the weights and activations of a neural network, which greatly reduces model size and addresses the limited storage space of mobile devices, while its overall inference speed compares favorably with other frameworks.
-
-In [PaddleClas](https://github.com/PaddlePaddle/PaddleClas), we use Paddle-Lite to [evaluate performance on mobile devices](../models/Mobile_en.md). In this section, we take the `MobileNetV1` model trained on the `ImageNet1k` dataset as an example to show how to use `Paddle-Lite` to evaluate model speed on a mobile device (here, a Snapdragon 855, SD855).
-
-## Evaluation Steps
-
-### Export the Inference Model
-
-* First, transform the model saved during training into an inference model. It can be exported with `tools/export_model.py` as follows:
-
-```shell
-python tools/export_model.py -m MobileNetV1 -p pretrained/MobileNetV1_pretrained/ -o inference/MobileNetV1
-```
-
-Finally, the `model` and `params` files are saved in `inference/MobileNetV1`.
-
-### Download Benchmark Binary File
-
-* Use the adb (Android Debug Bridge) tool to connect the Android phone to the PC for development and debugging. After installing adb and making sure that the PC and the phone are connected, use the following command to check the ARM version of the phone and select the matching pre-compiled benchmark binary:
-
-```shell
-adb shell getprop ro.product.cpu.abi
-```
-
-* Download the benchmark_bin file:
-
-```shell
-wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/benchmark_bin_v8
-```
-
-If the ARM version is v7, download the v7 benchmark_bin file instead:
-
-```shell
-wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/benchmark_bin_v7
-```
-
-### Inference Benchmark
-
-After the PC and the mobile phone are connected, use the following command to start the model evaluation:
-
-```
-sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
-```
-
-Here `./benchmark_bin_v8` is the path of the benchmark binary, `./inference` is the directory containing all the models to be evaluated, `result_armv8.txt` is the result file, and the final parameter `true` means that the model will be optimized before evaluation. The result file `result_armv8.txt` is saved in the current folder; its contents look as follows:
-
-```
-PaddleLite Benchmark
-Threads=1 Warmup=10 Repeats=30
-MobileNetV1 min = 30.89100 max = 30.73600 average = 30.79750
-
-Threads=2 Warmup=10 Repeats=30
-MobileNetV1 min = 18.26600 max = 18.14000 average = 18.21637
-
-Threads=4 Warmup=10 Repeats=30
-MobileNetV1 min = 10.03200 max = 9.94300 average = 9.97627
-```
-
-This shows the model inference latency under different numbers of threads; the unit is milliseconds. Taking the single-thread result as an example, the average latency of MobileNetV1 on SD855 is about 30.798 ms.
-
-### Model Optimization and Speed Evaluation
-
-* In the benchmark step above, we mentioned that the model is optimized before evaluation; here you can optimize the model first and then load the optimized model directly for speed evaluation.
-
-* Paddle-Lite provides multiple strategies to automatically optimize the original trained model, including quantization, subgraph fusion, hybrid scheduling, kernel optimization and so on.
-To make the optimization more convenient and easy to use, Paddle-Lite provides an `opt` tool that automatically completes the optimization steps and outputs a lightweight, optimized, executable model; it can be downloaded from the [Paddle-Lite Model Optimization Page](https://paddle-lite.readthedocs.io/zh/latest/user_guides/model_optimize_tool.html). Here we take `macOS` as the development environment, download the [opt_mac](https://paddlelite-data.bj.bcebos.com/model_optimize_tool/opt_mac) optimization tool, and use the following commands to optimize the model:
-
-```shell
-model_file="../MobileNetV1/model"
-param_file="../MobileNetV1/params"
-opt_models_dir="./opt_models"
-mkdir ${opt_models_dir}
-./opt_mac --model_file=${model_file} \
-    --param_file=${param_file} \
-    --valid_targets=arm \
-    --optimize_out_type=naive_buffer \
-    --prefer_int8_kernel=false \
-    --optimize_out=${opt_models_dir}/MobileNetV1
-```
-
-Here `model_file` and `param_file` are the paths of the exported model and parameter files respectively. After a successful conversion, `MobileNetV1.nb` is saved in `opt_models`.
-
-Use the benchmark_bin file to load the optimized model for evaluation:
-
-```shell
-bash benchmark.sh ./benchmark_bin_v8 ./opt_models result_armv8.txt
-```
-
-Finally, the result is saved in `result_armv8.txt` and shown as follows:
-
-```
-PaddleLite Benchmark
-Threads=1 Warmup=10 Repeats=30
-MobileNetV1_lite min = 30.89500 max = 30.78500 average = 30.84173
-
-Threads=2 Warmup=10 Repeats=30
-MobileNetV1_lite min = 18.25300 max = 18.11000 average = 18.18017
-
-Threads=4 Warmup=10 Repeats=30
-MobileNetV1_lite min = 10.00600 max = 9.90000 average = 9.96177
-```
-
-Taking the single-thread result as an example, the average latency of the optimized MobileNetV1 on SD855 is about 30.842 ms.
-
-For more detailed parameter explanations and Paddle-Lite usage, please refer to the [Paddle-Lite docs](https://paddle-lite.readthedocs.io/zh/latest/).
diff --git a/docs/en/extension/paddle_quantization_en.md b/docs/en/extension/paddle_quantization_en.md
deleted file mode 100644
index e84e3a820e585e974d7a011a0a1a33a89298f90b..0000000000000000000000000000000000000000
--- a/docs/en/extension/paddle_quantization_en.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Model Quantization
-
-Int8 quantization is one of the key features of [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim).
-It supports two kinds of quantization-aware training, the **dynamic strategy** and the **static strategy**,
-as well as layer-wise and channel-wise quantization,
-and models generated by PaddleSlim can be deployed with Paddle-Lite.
-
-Using this toolkit, [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) quantized the MobileNetV3_large_x1_0 model, whose accuracy after distillation is 78.9%.
-After quantization, the prediction speed on SD855 is accelerated from 19.308 ms to 14.395 ms,
-the storage size is reduced from 21 M to 10 M,
-and the top-1 recognition accuracy is 75.9%.
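As a rough sketch of how such a quantization-aware run is typically launched — the config path below is a hypothetical placeholder, and the PaddleSlim README linked right after is the authoritative reference:

```shell
# hypothetical config name; see deploy/slim/README_en.md for the real entry point and configs
python3 tools/train.py -c configs/slim/MobileNetV3_large_x1_0_quantization.yaml
```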
-For specific training methods, please refer to [PaddleSlim quant aware](../../../deploy/slim/README_en.md).
diff --git a/docs/en/extension/paddle_serving_en.md b/docs/en/extension/paddle_serving_en.md
deleted file mode 100644
index 3ad259526a411c46f3e25207ebf77aa45198d67e..0000000000000000000000000000000000000000
--- a/docs/en/extension/paddle_serving_en.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Model Service Deployment
-
-## Overview
-[Paddle Serving](https://github.com/PaddlePaddle/Serving) aims to help deep-learning researchers easily deploy online inference services. It supports one-click industrial-grade deployment, high concurrency with efficient communication between client and server, and clients in multiple programming languages.
-
-This section takes HTTP inference service deployment as an example to introduce how to use Paddle Serving to deploy model services in PaddleClas.
-
-## Install Serving
-
-As recommended on the Serving official website, it is best to use docker to install and deploy the Serving environment. First, pull the docker image and create a Serving-based docker container:
-
-```shell
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
-nvidia-docker exec -it test bash
-```
-
-Inside docker, install the packages required by Serving:
-
-```shell
-pip install paddlepaddle-gpu
-pip install paddle-serving-client
-pip install paddle-serving-server-gpu
-```
-
-* If the installation is too slow, you can append `-i https://pypi.tuna.tsinghua.edu.cn/simple` to the pip commands to speed up the process.
-
-* If you want to deploy a CPU service, install the CPU version of Serving instead:
-
-```shell
-pip install paddle-serving-server
-```
-
-### Export Model
-
-Export the Serving model with `tools/export_serving_model.py`. Taking ResNet50_vd as an example, the command is as follows:
-
-```shell
-python tools/export_serving_model.py -m ResNet50_vd -p ./pretrained/ResNet50_vd_pretrained/ -o serving
-```
-
-Finally, the client configuration, model parameters and structure files are saved in `ppcls_client_conf` and `ppcls_model`.
-
-### Service Deployment and Request
-
-* Use the following command to start the Serving service:
-
-```shell
-python tools/serving/image_service_gpu.py serving/ppcls_model workdir 9292
-```
-
-Here `serving/ppcls_model` is the path of the Serving model just saved, `workdir` is the working directory, and `9292` is the service port.
-
-* Use the following script to send an identification request to the Serving service and get the result:
-
-```
-python tools/serving/image_http_client.py 9292 ./docs/images/logo.png
-```
-
-Here `9292` is the port for sending the request, consistent with the port the Serving service was started on, and `./docs/images/logo.png` is the test image. The final top-1 label and probability are returned.
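To sanity-check the service end to end, the same client script can be looped over several local images — a sketch, in which `./docs/images/demo.jpg` is a hypothetical second test image:

```shell
# send a few test images in sequence through the documented HTTP client
for img in ./docs/images/logo.png ./docs/images/demo.jpg; do
    python tools/serving/image_http_client.py 9292 "$img"
done
```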
-
-* For more Serving deployment examples, such as the RPC inference service, please refer to the Serving official website: [https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/imagenet)
diff --git a/docs/en/extension/train_with_DALI_en.md b/docs/en/extension/train_with_DALI_en.md
deleted file mode 100644
index a67a76166b0d890d69f5e2a8cd14c68b146c785b..0000000000000000000000000000000000000000
--- a/docs/en/extension/train_with_DALI_en.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# Train with DALI
-
-## Preface
-[The NVIDIA Data Loading Library](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html) (DALI) is a library for data loading and pre-processing that accelerates deep learning applications. It can be used to build the DataLoader of Paddle.
-
-Since deep learning relies on a large amount of data in the training stage, and this data needs to be loaded and preprocessed, these operations are usually executed on the CPU, which limits further improvement of the training speed; especially when the batch size is large, they become the speed bottleneck. DALI can use the GPU to accelerate these operations and thereby further improve the training speed.
-
-## Installing DALI
-DALI only supports Linux x64 and requires CUDA 10.0 or later.
-
-* For CUDA 10:
-
-    pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda100
-
-* For CUDA 11.0:
-
-    pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda110
-
-For more information about installing DALI, please refer to [DALI](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/installation.html).
-
-## Using DALI
-PaddleClas supports training with DALI in static graph mode. Since DALI only supports GPU training, `CUDA_VISIBLE_DEVICES` needs to be set, and since DALI occupies GPU memory, some GPU memory must be reserved for it. To train with DALI, just set the field `use_dali = True` in the training config, or start the training with the following command:
-
-```shell
-# set the GPUs that can be seen
-export CUDA_VISIBLE_DEVICES="0"
-
-# set the fraction of GPU memory used for neural network training, generally 0.8 or 0.7; the remaining GPU memory is reserved for DALI
-export FLAGS_fraction_of_gpu_memory_to_use=0.80
-
-python tools/static/train.py -c configs/ResNet/ResNet50.yaml -o use_dali=True
-```
-
-You can also train with multiple GPUs:
-
-```shell
-# set the GPUs that can be seen
-export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
-
-# set the fraction of GPU memory used for neural network training, generally 0.8 or 0.7; the remaining GPU memory is reserved for DALI
-export FLAGS_fraction_of_gpu_memory_to_use=0.80
-
-python -m paddle.distributed.launch \
-    --gpus="0,1,2,3,4,5,6,7" \
-    tools/static/train.py \
-        -c ./configs/ResNet/ResNet50.yaml \
-        -o use_dali=True
-```
-
-## Train with FP16
-
-On this basis, using FP16 half-precision training can further improve the training speed; you can refer to the following command.
-
-```shell
-export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
-export FLAGS_fraction_of_gpu_memory_to_use=0.8
-
-python -m paddle.distributed.launch \
-    --gpus="0,1,2,3,4,5,6,7" \
-    tools/static/train.py \
-        -c configs/ResNet/ResNet50_fp16.yaml
-```
diff --git a/docs/en/faq_series/index.rst b/docs/en/faq_series/index.rst
index 106fc0708979d74862dc8dbbc64bdaf5c0c9c6aa..69a9f2d2a6fdad7a30c49fb000b93caf28ffa3a1 100644
--- a/docs/en/faq_series/index.rst
+++ b/docs/en/faq_series/index.rst
@@ -2,7 +2,7 @@ faq_series
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    faq_2021_s2_en.md
    faq_2021_s1_en.md
diff --git a/docs/en/image_recognition_pipeline/index.rst b/docs/en/image_recognition_pipeline/index.rst
index 1e87e37e146e1fd094cf05082bc68db7248f9190..95aa67d4caca1a0c9bb6ff629fde764dc41d379d 100644
--- a/docs/en/image_recognition_pipeline/index.rst
+++ b/docs/en/image_recognition_pipeline/index.rst
@@ -2,7 +2,7 @@ image_recognition_pipeline
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    mainbody_detection_en.md
    feature_extraction_en.md
diff --git a/docs/en/index.rst b/docs/en/index.rst
index 30cc73b43abd67f3a05bc792bcfd43bc3882205e..0bb50d3dc0e14a02cf37af99606ce5817245a24b 100644
--- a/docs/en/index.rst
+++ b/docs/en/index.rst
@@ -5,7 +5,6 @@
    :maxdepth: 2
 
    models_training/index
-   extension/index
    introduction/index
    image_recognition_pipeline/index
    others/index
diff --git a/docs/en/inference_deployment/index.rst b/docs/en/inference_deployment/index.rst
index 16f9d5557d88745144ace3ac465f01ee9a8d2117..1e3c344e46c5328bdd32d7c987d7e44d63a9478f 100644
--- a/docs/en/inference_deployment/index.rst
+++ b/docs/en/inference_deployment/index.rst
@@ -2,7 +2,7 @@ inference_deployment
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    export_model_en.md
    python_deploy_en.md
diff --git a/docs/en/installation/index.rst b/docs/en/installation/index.rst
index 832675a6b6265ad860dece20f466bd6f981d0629..39d432ae7098709006ef8326d818f67623bf2fac 100644
--- a/docs/en/installation/index.rst
+++ b/docs/en/installation/index.rst
@@ -2,7 +2,7 @@ installation
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    install_paddle_en.md
    install_paddleclas_en.md
diff --git a/docs/en/installation/install_paddle_en.md b/docs/en/installation/install_paddle_en.md
index 3922b7325da66e53c6dd1b1b31f0dc6966d2252f..c282f3ed602e7988b4cd7533e1a32218a77aa7fe 100644
--- a/docs/en/installation/install_paddle_en.md
+++ b/docs/en/installation/install_paddle_en.md
@@ -1,4 +1,4 @@
-# Installation PaddlePaddle
+# Install PaddlePaddle
 
 ---
 
@@ -9,7 +9,7 @@
 - [3. Install PaddlePaddle using pip](#3)
 - [4. Verify installation](#4)
 
-At present, **PaddleClas** requires **PaddlePaddle** version **>=2.0**. Docker is recomended to run Paddleclas, for more detailed information about docker and nvidia-docker, you can refer to the [tutorial](https://docs.docker.com/get-started/). If you do not want to use docker, you can skip section [2. (Recommended) Prepare a docker environment](#2), and go into section [3. Install PaddlePaddle using pip](#3).
+At present, **PaddleClas** requires **PaddlePaddle** version `>=2.0`. Docker is recommended to run PaddleClas; for more detailed information about docker and nvidia-docker, you can refer to the [tutorial](https://docs.docker.com/get-started/). If you do not want to use docker, you can skip section [2. (Recommended) Prepare a docker environment](#2) and go to section [3. Install PaddlePaddle using pip](#3).
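For readers of this page, the pip route that the paragraph above points to boils down to the following sketch (the GPU package name is the standard one on PyPI; pinning a specific version is optional as long as it is `>=2.0`):

```shell
# install a GPU build of PaddlePaddle (>= 2.0) and verify that it imports
python3 -m pip install paddlepaddle-gpu --upgrade
python3 -c "import paddle; print(paddle.__version__)"
```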
@@ -96,5 +96,5 @@ python -c "import paddle; print(paddle.__version__)"
 Note:
 
 * Make sure the compiled source code is later than PaddlePaddle 2.0.
-* Indicate **WITH_DISTRIBUTE=ON** when compiling, Please refer to [Instruction](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/install/Tables.html#id3) for more details.
+* Specify `WITH_DISTRIBUTE=ON` when compiling; please refer to the [instruction](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/install/Tables.html#id3) for more details.
 * When running in docker, in order to ensure that the container has enough shared memory for the dataloader acceleration of Paddle, please set the parameter `--shm-size=8g` when creating the docker container; if conditions permit, you can set it to a larger value.
diff --git a/docs/en/introduction/index.rst b/docs/en/introduction/index.rst
index 7b8fdf358355a07e33219ccc86f2442346f151be..e22a647e856ecec44d261a363e8721a0e3caed68 100644
--- a/docs/en/introduction/index.rst
+++ b/docs/en/introduction/index.rst
@@ -2,7 +2,7 @@ introduction
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    function_intro_en.md
    more_demo/index
diff --git a/docs/en/models/index.rst b/docs/en/models/index.rst
index abeee98c30d4370d158489b30d94d31b4d66e9be..4642eb1de4cd93daf58fffd9e08e476f2b0968d3 100644
--- a/docs/en/models/index.rst
+++ b/docs/en/models/index.rst
@@ -2,29 +2,29 @@ models
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
-   PP-LCNet_en.md
-   SEResNext_and_Res2Net_en.md
-   ReXNet_en.md
-   Others_en.md
-   Twins_en.md
-   Inception_en.md
-   HarDNet_en.md
+   DPN_DenseNet_en.md
+   models_intro_en.md
+   RepVGG_en.md
    EfficientNet_and_ResNeXt101_wsl_en.md
+   ViT_and_DeiT_en.md
+   SwinTransformer_en.md
+   Others_en.md
+   SEResNext_and_Res2Net_en.md
    ESNet_en.md
    HRNet_en.md
-   RepVGG_en.md
+   ReXNet_en.md
+   Inception_en.md
+   TNT_en.md
    RedNet_en.md
-   Mobile_en.md
+   DLA_en.md
    ResNeSt_RegNet_en.md
+   PP-LCNet_en.md
+   HarDNet_en.md
    ResNet_and_vd_en.md
-   models_intro_en.md
-   TNT_en.md
-   ViT_and_DeiT_en.md
    LeViT_en.md
-   DLA_en.md
-   PVTV2_en.md
-   DPN_DenseNet_en.md
+   Mobile_en.md
    MixNet_en.md
-   SwinTransformer_en.md
+   Twins_en.md
+   PVTV2_en.md
diff --git a/docs/en/models_training/index.rst b/docs/en/models_training/index.rst
index 687cb73932a42c339ac00900655d6987683e342d..1d27e65bd5242763c8cdc02edec53161065c879c 100644
--- a/docs/en/models_training/index.rst
+++ b/docs/en/models_training/index.rst
@@ -2,7 +2,7 @@ models_training
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    config_description_en.md
    recognition_en.md
diff --git a/docs/en/others/index.rst b/docs/en/others/index.rst
index e1c98e30c77deef877d016622e75ab0bd5a08549..d106343c871e4e5741cf3d2434562c9f220fa7bf 100644
--- a/docs/en/others/index.rst
+++ b/docs/en/others/index.rst
@@ -2,7 +2,7 @@ others
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    transfer_learning_en.md
    train_with_DALI_en.md
diff --git a/docs/en/others/paddle_mobile_inference_en.md b/docs/en/others/paddle_mobile_inference_en.md
index 153c96ffa44284742afa1acb8f9efe4e597414bd..4cb73179033ab54de7d09a8d173066f627820401 100644
--- a/docs/en/others/paddle_mobile_inference_en.md
+++ b/docs/en/others/paddle_mobile_inference_en.md
@@ -1,4 +1,4 @@
-# Paddle-Lite
+# Benchmark on Mobile
 
 ---
 
diff --git a/docs/en/quick_start/index.rst b/docs/en/quick_start/index.rst
index 4380b8b1f838208da19ddc53f08d2a3dd670fe21..2edd78ddec35abbb56e6b3a258cf3b16356fd136 100644
--- a/docs/en/quick_start/index.rst
+++ b/docs/en/quick_start/index.rst
@@ -2,7 +2,7 @@ quick_start
 ================================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    quick_start_classification_new_user_en.md
    quick_start_classification_professional_en.md
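Since nearly every hunk in this patch adjusts `:maxdepth:` or reshuffles toctree entries, the rendered navigation is worth checking after applying it — a sketch, assuming a standard Sphinx layout with a `Makefile` under `docs/en` and the doc requirements installed:

```shell
# rebuild the English docs and inspect the new toctree depth in a browser
cd docs/en
make html
# then open _build/html/index.html
```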