diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 8e221ad1bf3f7d3481cf27c7bf8dc3ab4a6957d3..d0c06e6ccf443f1aae038833429633fe0b4d458d 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,6 +1,6 @@
# Contribute Code
-You are welcome to contribute to project PaddlePaddle. To contribute to PaddlePaddle, you have to agree with the
+You are welcome to contribute to the PaddlePaddle project. To contribute, you have to agree to the
[PaddlePaddle Contributor License Agreement](https://gist.github.com/XiaoguangHu01/75018ad8e11af13df97070dd18ae6808).
We sincerely appreciate your contribution. This document explains our workflow and work style.
diff --git a/README.md b/README.md
index 4437045721287418d87754ca9733c6363f70f6e6..856154ce3ce0aa88b1d466cfe3049e006b26ce32 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@

-
+
--------------------------------------------------------------------------------
English | [简体中文](./README_cn.md)
@@ -52,13 +52,13 @@ Now our developers can acquire Tesla V100 online computing resources for free. I
- **High-Performance Inference Engines for Comprehensive Deployment Environments**
PaddlePaddle is not only compatible with models trained in 3rd-party open-source frameworks, but also offers complete inference products for various production scenarios. Our inference product line includes [Paddle Inference](https://paddle-inference.readthedocs.io/en/master/guides/introduction/index_intro.html): a native inference library for high-performance server and cloud inference; [Paddle Serving](https://github.com/PaddlePaddle/Serving): a service-oriented framework suitable for distributed and pipelined production environments; [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite): an ultra-lightweight inference engine for mobile and IoT environments; [Paddle.js](https://www.paddlepaddle.org.cn/paddle/paddlejs): a frontend inference engine for browsers and mini-apps. Furthermore, through extensive optimization for the leading hardware in each scenario, Paddle inference engines outperform most of the other mainstream frameworks.
-
-
+
+
- **Industry-Oriented Models and Libraries with Open Source Repositories**
PaddlePaddle includes and maintains more than 100 mainstream models that have been practiced and polished for a long time in the industry. Some of these models have won major prizes in key international competitions. Meanwhile, PaddlePaddle offers more than 200 additional pre-training models (some of them with source code) to facilitate the rapid development of industrial applications. [Click here to learn more](https://github.com/PaddlePaddle/models)
-
+
## Documentation
@@ -71,7 +71,7 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide
- [Practice](https://www.paddlepaddle.org.cn/documentation/docs/zh/tutorial/index_cn.html)
-   So far you have already been familiar with Fluid. And the next step should be building a more efficient model or inventing your original Operator.
+   By now you should already be familiar with Fluid, and the next step is to build a more efficient model or invent your own original Operator.
- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/en/api/index_en.html)
@@ -86,11 +86,11 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide
- [Github Issues](https://github.com/PaddlePaddle/Paddle/issues): bug reports, feature requests, install issues, usage issues, etc.
- QQ discussion group: 441226485 (PaddlePaddle).
- [Forums](https://aistudio.baidu.com/paddle/forum): discuss implementations, research, etc.
-
+
## Courses
- [Server Deployments](https://aistudio.baidu.com/aistudio/course/introduce/19084): Courses introducing high-performance server deployments via local and remote services.
-- [Edge Deployments](https://aistudio.baidu.com/aistudio/course/introduce/22690): Courses intorducing edge deployments from mobile, IoT to web and applets.
+- [Edge Deployments](https://aistudio.baidu.com/aistudio/course/introduce/22690): Courses introducing edge deployments from mobile and IoT to web and applets.
## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).
diff --git a/README_cn.md b/README_cn.md
index f4cb6f4fff78eb9a3ab3f3a2abd1f8fc80cd9e4e..13f42764ba3d0e811fe8dc667b5933c80f83085e 100644
--- a/README_cn.md
+++ b/README_cn.md
@@ -39,13 +39,13 @@ PaddlePaddle用户可领取**免费Tesla V100在线算力资源**,训练模型
- **开发便捷的产业级深度学习框架**
飞桨深度学习框架采用基于编程逻辑的组网范式,对于普通开发者而言更容易上手,符合他们的开发习惯。同时支持声明式和命令式编程,兼具开发的灵活性和高性能。网络结构自动设计,模型效果超越人类专家。
-
+
- **支持超大规模深度学习模型的训练**
飞桨突破了超大规模深度学习模型训练技术,实现了支持千亿特征、万亿参数、数百节点的开源大规模训练平台,攻克了超大规模深度学习模型的在线学习难题,实现了万亿规模参数模型的实时更新。
[查看详情](https://github.com/PaddlePaddle/Fleet)
-
+
- **支持多端多平台的高性能推理部署工具**
@@ -66,14 +66,14 @@ PaddlePaddle用户可领取**免费Tesla V100在线算力资源**,训练模型
- [使用指南](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/index_cn.html)
或许您想从深度学习基础开始学习飞桨
-
+
- [应用实践](https://www.paddlepaddle.org.cn/documentation/docs/zh/tutorial/index_cn.html)
-
+
- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/index_cn.html)
新的API支持代码更少更简洁的程序
-
+
- [贡献方式](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/08_contribution/index_cn.html)
@@ -84,7 +84,7 @@ PaddlePaddle用户可领取**免费Tesla V100在线算力资源**,训练模型
- 欢迎您通过[Github Issues](https://github.com/PaddlePaddle/Paddle/issues)来提交问题、报告与建议
- QQ群: 441226485 (PaddlePaddle)
- [论坛](https://aistudio.baidu.com/paddle/forum): 欢迎大家在PaddlePaddle论坛分享在使用PaddlePaddle中遇到的问题和经验, 营造良好的论坛氛围
-
+
## 课程
- [服务器部署](https://aistudio.baidu.com/aistudio/course/introduce/19084): 详细介绍高性能服务器端部署实操,包含本地端及服务化Serving部署等
diff --git a/SECURITY.md b/SECURITY.md
index 97b092d6dfc01863a0b9143a2f8ad00a35a75752..d06ec8727b95b134c47b054864e98838ec497b4d 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -48,7 +48,7 @@ We will indicate the bug fix in the release of PaddlePaddle, and publish the vul
### What is a vulnerability?
-In the process of computation graphs in PaddlePaddle, models can perform arbitrary computations , including reading and writing files, communicating with the network, etc. It may cause memory exhaustion, deadlock, etc., which will lead to unexpected behavior of PaddlePaddle. We consider these behavior to be security vulnerabilities only if they are out of the intention of the operation involved.
+When executing computation graphs, PaddlePaddle models can perform arbitrary computations, including reading and writing files, communicating with the network, etc. These computations may cause memory exhaustion, deadlock, etc., which will lead to unexpected behavior of PaddlePaddle. We consider such behavior a security vulnerability only if it is outside the intention of the operation involved.
diff --git a/doc/README.md b/doc/README.md
index 998a39f10699af6d1a391f177a5cf03c9ae170fd..6709ab4cb67acbb3ca2737348eec77a677da7e97 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -1,6 +1,6 @@
# For Readers and Developers
-Thanks for reading PaddlePaddle documentation.
+Thanks for reading the PaddlePaddle documentation.
Since **September 17th, 2018**, the **0.15.0 and develop** documentation source has been moved to
[FluidDoc Repo](https://github.com/PaddlePaddle/FluidDoc) and updated there.
diff --git a/paddle/fluid/distributed/ps/service/README.md b/paddle/fluid/distributed/ps/service/README.md
index a219e92c63b75004f990f9886a7ebd28da517085..51abf9081118e461b538738a20f4d8f38e456eaa 100755
--- a/paddle/fluid/distributed/ps/service/README.md
+++ b/paddle/fluid/distributed/ps/service/README.md
@@ -1,6 +1,6 @@
# 目录说明
-* PSServer
+* PSServer
* PSClient
* PsService
* Communicator
diff --git a/paddle/fluid/inference/analysis/README.md b/paddle/fluid/inference/analysis/README.md
index 9a53ce53ab6a756af666de99c8729bf3da2e4a09..bc63ce169c98862f064301ea335fa0a5fb6b93a5 100644
--- a/paddle/fluid/inference/analysis/README.md
+++ b/paddle/fluid/inference/analysis/README.md
@@ -1,7 +1,7 @@
# Inference Analysis
The `inference/analysis` module is used to analyze and optimize the inference program,
-it references some philosophy from `LLVM/analysis`,
+it references some philosophy from `LLVM/analysis`,
and makes the various optimization features pluggable and able to co-exist in a pipeline.
We borrowed some concepts from LLVM, such as
@@ -31,14 +31,14 @@ each pass will generate unified debug information or visualization for better de
## Supported Passes
### `FluidToDataFlowGraphPass`
-Transform the fluid `ProgramDesc` to a `DataFlowGraph` to give an abstract representation for all the middle passes,
+Transforms the fluid `ProgramDesc` into a `DataFlowGraph`, giving an abstract representation for all the middle passes;
this should be the first pass of the pipeline.
### `DataFlowGraphToFluidPass`
Generates a final `ProgramDesc` from a data flow graph; this should be the last pass of the pipeline.
### `TensorRTSubgraphNodeMarkPass`
-Mark the `Node` that are supported by TensorRT,
+Marks the `Node`s that are supported by TensorRT;
this pass will generate a visualization file which can be used for debugging.
### `TensorRTSubGraphPass`
diff --git a/paddle/fluid/inference/api/README.md b/paddle/fluid/inference/api/README.md
index a2d685d723bd9ab2b84969adb86e177a8754328d..990b061c8f92b01a593241217b24b5b3204c9121 100644
--- a/paddle/fluid/inference/api/README.md
+++ b/paddle/fluid/inference/api/README.md
@@ -9,7 +9,7 @@ You can easily deploy a model trained by Paddle following the steps as below:
## The APIs
-All the released APIs are located in the `paddle_inference_api.h` header file.
+All the released APIs are located in the `paddle_inference_api.h` header file.
The stable APIs are wrapped in `namespace paddle`; the unstable APIs are protected by `namespace paddle::contrib`.
## Write some code
diff --git a/paddle/fluid/inference/api/demo_ci/README.md b/paddle/fluid/inference/api/demo_ci/README.md
index 928ff84baac5eaef3d60a73d6dfdf93b078c2117..bb316753945c3a7ab18c52a6007b7a6a357696bc 100644
--- a/paddle/fluid/inference/api/demo_ci/README.md
+++ b/paddle/fluid/inference/api/demo_ci/README.md
@@ -2,11 +2,11 @@
There are several demos:
-- simple_on_word2vec:
-  - Follow the C++ codes is in `simple_on_word2vec.cc`.
+- simple_on_word2vec:
+  - The C++ code is in `simple_on_word2vec.cc`.
- It is suitable for the word2vec model.
-- vis_demo:
-  - Follow the C++ codes is in `vis_demo.cc`.
+- vis_demo:
+  - The C++ code is in `vis_demo.cc`.
- It is suitable for three models: mobilenet, se_resnext50 and ocr.
- Input data format:
- Each line contains a single record
@@ -15,7 +15,7 @@ There are several demos:
\t
```
-To build and execute the demos, simply run
+To build and execute the demos, simply run
```
./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU
```
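For example, a hypothetical invocation (the Paddle root path and the ON/OFF switch values are illustrative, not from the original document):
```
./run.sh /path/to/Paddle ON ON
```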
diff --git a/paddle/fluid/inference/api/high_level_api.md b/paddle/fluid/inference/api/high_level_api.md
index 5b90c7d369c57a9f5ebf76e201dbfcd7b9fc790e..ca22767a1b8895f365633541c86d24182a3268b1 100644
--- a/paddle/fluid/inference/api/high_level_api.md
+++ b/paddle/fluid/inference/api/high_level_api.md
@@ -6,7 +6,7 @@ The APIs are described in `paddle_inference_api.h`, just one header file, and tw
## PaddleTensor
We provide the `PaddleTensor` data structure to give a general tensor interface.
-The definition is
+The definition is
```c++
struct PaddleTensor {
@@ -17,8 +17,8 @@ struct PaddleTensor {
};
```
-The data is stored in a continuous memory `PaddleBuf,` and a `PaddleDType` specifies tensor's data type.
-The `name` field is used to specify the name of an input variable,
+The data is stored in a contiguous memory block, `PaddleBuf`, and a `PaddleDType` specifies the tensor's data type.
+The `name` field is used to specify the name of an input variable,
which is important when there are multiple inputs and one needs to distinguish which variable to set.
## engine
@@ -38,7 +38,7 @@ enum class PaddleEngineKind {
```
## PaddlePredictor and how to create one
-The main interface is `PaddlePredictor,` there are following methods
+The main interface is `PaddlePredictor`; it has the following methods:
- `bool Run(const std::vector<PaddleTensor>& inputs, std::vector<PaddleTensor>* output_data)`
- takes inputs and outputs `output_data`.
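A minimal end-to-end sketch of this API (the model directory and the variable name `x` are illustrative, not from the original document):

```c++
#include <vector>
#include "paddle_inference_api.h"

int main() {
  // Configure a native predictor; the model path is hypothetical.
  paddle::NativeConfig config;
  config.model_dir = "./my_model";
  config.use_gpu = false;

  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Prepare one input tensor; "x" is a placeholder variable name.
  std::vector<float> buffer = {1.f, 2.f, 3.f, 4.f};
  paddle::PaddleTensor input;
  input.name = "x";
  input.shape = {1, 4};
  input.data = paddle::PaddleBuf(buffer.data(), buffer.size() * sizeof(float));
  input.dtype = paddle::PaddleDType::FLOAT32;

  // Run inference and collect the outputs.
  std::vector<paddle::PaddleTensor> outputs;
  predictor->Run({input}, &outputs);
  return 0;
}
```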
diff --git a/paddle/fluid/inference/api/high_level_api_cn.md b/paddle/fluid/inference/api/high_level_api_cn.md
index 0d420f3369742a73f2592cd780ed4bc3bebe9439..6fb4a55f200ebc8c34de4b3266f7021e2e97c312 100644
--- a/paddle/fluid/inference/api/high_level_api_cn.md
+++ b/paddle/fluid/inference/api/high_level_api_cn.md
@@ -5,7 +5,7 @@
预测库包含:
- 头文件 `paddle_inference_api.h` 定义了所有的接口
-- 库文件 `libpaddle_inference.so/.a(Linux/Mac)` `libpaddle_inference.lib/paddle_inference.dll(Windows)`
+- 库文件 `libpaddle_inference.so/.a(Linux/Mac)` `libpaddle_inference.lib/paddle_inference.dll(Windows)`
下面是详细的一些 API 概念介绍
diff --git a/paddle/fluid/inference/tests/api/int8_mkldnn_quantization.md b/paddle/fluid/inference/tests/api/int8_mkldnn_quantization.md
index 51870e804144a002b15750ee8c8fa9f4c0af40dc..795f382f0c42e5b5f3c81febd137070726ded548 100644
--- a/paddle/fluid/inference/tests/api/int8_mkldnn_quantization.md
+++ b/paddle/fluid/inference/tests/api/int8_mkldnn_quantization.md
@@ -126,7 +126,7 @@ MODEL_NAME=googlenet, mobilenetv1, mobilenetv2, resnet101, resnet50, vgg16, vgg1
* ## Prepare dataset
* Download and preprocess the full Pascal VOC2007 test set.
-
+
```bash
cd /PATH/TO/PADDLE
python paddle/fluid/inference/tests/api/full_pascalvoc_test_preprocess.py
diff --git a/paddle/fluid/inference/tests/infer_ut/README.md b/paddle/fluid/inference/tests/infer_ut/README.md
index 886c9f1eb1484adc6dcb2ffe3c9415bc53392d0b..94e2665d7759d4d14f5bb18430ac3bfdcb4fa490 100644
--- a/paddle/fluid/inference/tests/infer_ut/README.md
+++ b/paddle/fluid/inference/tests/infer_ut/README.md
@@ -9,7 +9,7 @@ There are several model tests currently:
- test_resnet50_quant.cc
- test_yolov3.cc
-To build and execute tests on Linux, simply run
+To build and execute tests on Linux, simply run
```
./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU $DATA_DIR
```
@@ -24,7 +24,7 @@ busybox bash ./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU $DATA_DIR
- `$TEST_GPU_CPU`: test both GPU/CPU mode or only CPU mode
- `$DATA_DIR`: download data path
-now only support 4 kinds of tests which controled by `--gtest_filter` argument, test suite name should be same as following.
+Now only 4 kinds of tests are supported, controlled by the `--gtest_filter` argument; the test suite name should be one of the following.
- `TEST(gpu_tester_*, test_name)`
- `TEST(cpu_tester_*, test_name)`
- `TEST(mkldnn_tester_*, test_name)`
diff --git a/paddle/fluid/operators/jit/README.en.md b/paddle/fluid/operators/jit/README.en.md
index 7d4dc6d47a512ee7ed75d99800968a38de98f090..713642c41eec42e1309c594ac8df7bf2e53bb9d3 100644
--- a/paddle/fluid/operators/jit/README.en.md
+++ b/paddle/fluid/operators/jit/README.en.md
@@ -4,7 +4,7 @@ JIT(Just In Time) Kernel contains actually generated code and some other impleme
Each implementation has its own condition to use, defined in `CanBeUsed`.
They are combined together to get the best performance of one single independent function.
They could be some very simple functions like vector multiply, or some complicated functions like LSTM.
-And they can be composed with some other exited jit kernels to build up a complex function.
+And they can be composed with other existing jit kernels to build up a complex function.
Currently it is only supported on CPU.
## Contents
@@ -38,14 +38,14 @@ All basical definations of jit kernels are addressed in `paddle/fluid/operators/
- `refer`: Each kernel must have one reference implementation on CPU, and it should only focus on correctness and should not depend on any third-party libraries.
- `gen`: The generated code should be kept here. It should be designed focusing on the best performance, which depends on Xbyak.
-- `more`: All other implementations should be kept in this folder with one directory corresponding to one library kind or method kind, such as mkl, mkldnn, openblas or intrinsic code. Each implementation should have it advantage.
+- `more`: All other implementations should be kept in this folder, with one directory corresponding to one library kind or method kind, such as mkl, mkldnn, openblas or intrinsic code. Each implementation should have its own advantage.
## How to use
We present these methods to get the functions:
- `GetAllCandidateFuncs`. It can return all the implementations supported. All of the implementations can get the same result. You can do some runtime benchmarks to choose which should actually be used.
- `GetDefaultBestFunc`. It only returns one default function pointer, which is tuned offline with some general configurations and attributes. This should cover most situations.
-- `KernelFuncs::Cache()`. It can get the default functions and save it for next time with the same attribute.
+- `KernelFuncs::Cache()`. It can get the default function and cache it for the next time with the same attribute.
- `GetReferFunc`. It can only get the reference code on CPU, and all the other implementations have the same logic as this reference code.
And here are some examples:
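For instance, a minimal sketch of fetching and calling a cached kernel (the `VMulTuple` kernel type, the attribute value and the wrapper function are illustrative, not taken from this hunk):

```c++
#include <vector>
#include "paddle/fluid/operators/jit/kernels.h"

namespace jit = paddle::operators::jit;

void VMulExample() {
  constexpr int n = 8;  // the kernel attribute: vector length (illustrative)
  // Fetch the best cached implementation for this attribute.
  auto vmul = jit::KernelFuncs<jit::VMulTuple<float>,
                               paddle::platform::CPUPlace>::Cache().At(n);
  std::vector<float> x(n, 1.f), y(n, 2.f), z(n);
  vmul(x.data(), y.data(), z.data(), n);  // elementwise multiply: z = x * y
}
```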
@@ -86,7 +86,7 @@ All kernels are inlcuded in `paddle/fluid/operators/jit/kernels.h`, which is aut
1. Add `your_key` at `KernelType`.
2. Add your new `KernelTuple` which must include `your_key`. It should be a combination of the data type, attribute type and function type. You can refer to `SeqPoolTuple`.
-3. Add reference function of `your_key`.
+3. Add a reference function for `your_key`.
Note:
- this should run on CPU and must not depend on any third party.
- Add `USE_JITKERNEL_REFER(your_key)` in `refer/CmakeLists.txt` to make sure this code can be used.
diff --git a/paddle/scripts/README.md b/paddle/scripts/README.md
index d7a86b653bec44c260a845d454c771ec4440993b..e9ec3d46a23daba9c671322c0fff85d4a8204864 100644
--- a/paddle/scripts/README.md
+++ b/paddle/scripts/README.md
@@ -15,7 +15,7 @@ PaddlePaddle applications directly in docker or on Kubernetes clusters.
To achieve this, we maintain a dockerhub repo: https://hub.docker.com/r/paddlepaddle/paddle
which provides pre-built environment images to build PaddlePaddle and generate corresponding `whl`
-binaries.(**We strongly recommend building paddlepaddle in our pre-specified Docker environment.**)
+binaries. (**We strongly recommend building PaddlePaddle in our pre-specified Docker environment.**)
## Development Workflow
@@ -52,8 +52,8 @@ cd Paddle
After the build finishes, you can get output `whl` package under
`build/python/dist`.
-This command will download the most recent dev image from docker hub, start a container in the backend and then run the build script `/paddle/paddle/scripts/paddle_build.sh build` in the container.
-The container mounts the source directory on the host into `/paddle`.
+This command will download the most recent dev image from docker hub, start a container in the background and then run the build script `/paddle/paddle/scripts/paddle_build.sh build` in the container.
+The container mounts the source directory on the host into `/paddle`.
When it writes to `/paddle/build` in the container, it actually writes to `$PWD/build` on the host.
### Build Options
diff --git a/paddle/scripts/musl_build/README.md b/paddle/scripts/musl_build/README.md
index 9842971301b5ae08f2c31495e6eb093bd0be39fd..a1b90b4649b98d6f59c96d8432a5d4a595ab5241 100644
--- a/paddle/scripts/musl_build/README.md
+++ b/paddle/scripts/musl_build/README.md
@@ -7,7 +7,7 @@ Paddle can be built for linux-musl such as alpine, and be used in libos-liked SG
# Build Automatically
1. Clone the Paddle source from GitHub
-
+
```bash
git clone https://github.com/PaddlePaddle/Paddle.git
```
@@ -50,7 +50,7 @@ mkdir -p build && cd build
ls ./output/*.whl
```
-# Build Manually
+# Build Manually
1. Start up the build docker, and enter the shell in the container
```bash
@@ -88,7 +88,7 @@ make -j8
# Scripts
1. **build_docker.sh**
the script that builds the compiling docker image. It uses Alpine Linux 3.10 as the musl linux build environment, and it will try to install all the compiling tools, development packages, and Python requirements for paddle musl compiling.
-
+
environment variables:
- PYTHON_VERSION: the version of python used for image building, default=3.7.
- WITH_PRUNE_DAYS: prune old docker images, with days limitation.
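For instance, a hypothetical invocation that overrides these defaults (the values are illustrative):

```bash
PYTHON_VERSION=3.7 WITH_PRUNE_DAYS=30 ./build_docker.sh
```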
diff --git a/python/paddle/README.rst b/python/paddle/README.rst
index 2d48ee4b26cafb9a07024ae77bbac6a2321b9b48..b84b54f1c3c42879c894b7a6a71362c908f60d93 100644
--- a/python/paddle/README.rst
+++ b/python/paddle/README.rst
@@ -48,7 +48,7 @@ We provide users with four installation methods ,which are pip, conda, docker an
- **pip or pip3 version 9.0.1+ (64 bit)**
-
+
#### Commands to install
@@ -115,12 +115,12 @@ If you want to install witch conda or docker or pip,please see commands to insta
PaddlePaddle is not only compatible with other open-source frameworks for model training, but also works well in ubiquitous deployments, varying from platforms to devices. More specifically, PaddlePaddle accelerates the inference procedure with the fastest speed-up. Note that a recent breakthrough in inference speed has been made by PaddlePaddle on Huawei's Kirin NPU through hardware/software co-optimization.
[Click here to learn more](https://github.com/PaddlePaddle/Paddle-Lite)
-
+
- **Industry-Oriented Models and Libraries with Open Source Repositories**
PaddlePaddle includes and maintains more than 100 mainstream models that have been practiced and polished for a long time in the industry. Some of these models have won major prizes in key international competitions. Meanwhile, PaddlePaddle offers more than 200 additional pre-training models (some of them with source code) to facilitate the rapid development of industrial applications. [Click here to learn more](https://github.com/PaddlePaddle/models)
-
+
## Documentation
@@ -135,10 +135,10 @@ We provide [English](http://www.paddlepaddle.org.cn/documentation/docs/en/1.8/be
- [User Guides](https://www.paddlepaddle.org.cn/documentation/docs/en/user_guides/index_en.html)
You might have gotten the hang of the Beginner’s Guide, and wish to model practical problems and build your original networks.
-
+
- [Advanced User Guides](https://www.paddlepaddle.org.cn/documentation/docs/en/advanced_guide/index_en.html)
-So far you have already been familiar with Fluid. And the next step should be building a more efficient model or inventing your original Operator.
+By now you should already be familiar with Fluid, and the next step is to build a more efficient model or invent your own original Operator.
- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/en/api/index_en.html)
diff --git a/python/paddle/fluid/contrib/slim/tests/README.md b/python/paddle/fluid/contrib/slim/tests/README.md
index 8688c96b7bd4724540b843748d23455710918854..e052cdfea84be8f599f630af6da980b9e5f0cab7 100644
--- a/python/paddle/fluid/contrib/slim/tests/README.md
+++ b/python/paddle/fluid/contrib/slim/tests/README.md
@@ -10,11 +10,11 @@ In **Release 1.7**, a support for [Ernie (NLP) Quant trained model](https://gith
In **Release 2.0**, further optimizations were added to Quant2: an INT8 `matmul` kernel, inplace execution of activation and `elementwise_add` operators, and broader support for the quantization-aware strategy from PaddleSlim.
-In this document we focus on the Quant2 approach only.
+In this document we focus on the Quant2 approach only.
## 0. Prerequisites
* PaddlePaddle version 2.0 or higher is required. For instructions on how to install it, see the [installation document](https://www.paddlepaddle.org.cn/install/quick).
-
+
* MKL-DNN and MKL are required. The highest performance gain can be observed using CPU servers supporting AVX512 instructions.
* INT8 accuracy is best on CPU servers supporting the AVX512 VNNI extension (e.g. CLX class Intel processors). A Linux server supports AVX512 VNNI instructions if the output of the command `lscpu` contains the `avx512_vnni` entry in the `Flags` section. AVX512 VNNI support on Windows can be checked using the [`coreinfo`](https://docs.microsoft.com/en-us/sysinternals/downloads/coreinfo) tool.
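For example, a quick check on Linux (the command prints `avx512_vnni` when the CPU supports it):

```bash
lscpu | grep -o avx512_vnni
```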
@@ -54,7 +54,7 @@ Notes:
and the quantization scales have to be collected for the `input1` and `outpu3` tensors in the Quant model.
2. Quantization of the following operators is supported: `conv2d`, `depthwise_conv2d`, `mul`, `fc`, `matmul`, `pool2d`, `reshape2`, `transpose2`, `concat`.
3. The longer the sequence of consecutive quantizable operators in the model, the bigger the performance boost that can be achieved through quantization:
-   ```... → conv2d → conv2d → pool2d → conv2d → conv2d → ...```
+   ```... → conv2d → conv2d → pool2d → conv2d → conv2d → ...```
Quantizing a single operator separated from other quantizable operators can bring no performance benefit or can even slow down the inference:
```... → swish → fc → softmax → ...```
@@ -94,8 +94,8 @@ The code snipped shows how the `Quant2Int8MkldnnPass` can be applied to a model
import paddle.fluid as fluid
from paddle.fluid.contrib.slim.quantization import Quant2Int8MkldnnPass
from paddle.fluid.framework import IrGraph
-  from paddle.fluid import core
-
+  from paddle.fluid import core
+
# Create the IrGraph by Program
graph = IrGraph(core.Graph(fluid.Program().desc), for_test=False)
place = fluid.CPUPlace()
@@ -187,7 +187,7 @@ Notes:
## 6. How to reproduce the results
-The steps below show, taking ResNet50 as an example, how to reproduce the above accuracy and performance results for Image Classification models.
+The steps below show, taking ResNet50 as an example, how to reproduce the above accuracy and performance results for Image Classification models.
To reproduce the NLP model results (Ernie), please follow [How to reproduce Ernie Quant results on MKL-DNN](https://github.com/PaddlePaddle/benchmark/tree/master/Inference/c%2B%2B/ernie/mkldnn/README.md).
### Prepare dataset
diff --git a/python/paddle/fluid/tests/unittests/collective/README.md b/python/paddle/fluid/tests/unittests/collective/README.md
index 2370ce07e05b4a77c0822e3e986e094030cc04b2..790d207074f80ef6191b3cfbfe3249af745fd31b 100644
--- a/python/paddle/fluid/tests/unittests/collective/README.md
+++ b/python/paddle/fluid/tests/unittests/collective/README.md
@@ -9,9 +9,9 @@
* `os`: The supported operating system, ignoring case. If the test runs on multiple operating systems, use ";" to split systems, for example, `apple;linux` means the test runs on both Apple and Linux. The supported values are `linux`, `win32` and `apple`. If the value is empty, the test runs on all operating systems.
* `arch`: the device's architecture. Similar to `os`, multiple values are split by ";", ignoring case. The supported architectures are `gpu`, `xpu`, `ASCEND`, `ASCEND_CL` and `rocm`.
* `timeout`: timeout of a unittest, whose unit is seconds. Blank means the default.
-* `run_type`: run_type of a unittest. Supported values are `NIGHTLY`, `EXCLUSIVE`, `CINN`, `DIST`, `GPUPS`, `INFER`, `EXCLUSIVE:NIGHTLY`, `DIST:NIGHTLY`,which are case-insensitive.
+* `run_type`: run_type of a unittest. Supported values are `NIGHTLY`, `EXCLUSIVE`, `CINN`, `DIST`, `GPUPS`, `INFER`, `EXCLUSIVE:NIGHTLY`, `DIST:NIGHTLY`, which are case-insensitive.
* `launcher`: the test launcher. Supported values are test_runner.py, dist_test.sh and custom scripts' names. Blank means test_runner.py.
-* `num_port`: the number of port used in a distributed unit test. Blank means automatically distributed port.
+* `num_port`: the number of ports used in a distributed unit test. Blank means ports are assigned automatically.
* `run_serial`: whether to run in serial mode. The value can be 1 or 0. Default (empty) is 0.
* `ENVS`: required environment variables. Multiple environments are split by ";".
* `conditions`: extra required conditions for some tests. The value is a list of boolean expressions in CMake syntax, split with ";". For example, the value can be `WITH_DGC;NOT WITH_NCCL` or `WITH_NCCL;${NCCL_VERSION} VERSION_GREATER_EQUAL 2212`. The relationship between these expressions is a conjunction.
@@ -22,14 +22,14 @@
python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py -f ${PADDLE_ROOT}/python/paddle/fluid/tests/unittests/collective/testslist.csv
```
Then the command generates a file named CMakeLists.txt in the same directory as the testslist.csv.
-* usgae:
-  The command accepts --files/-f or --dirpaths/-d options, both of which accepts multiple values.
-  Option -f accepts a list of testslist.csv.
-  Option -d accepts a list of directory path including files named testslist.csv.
+* usage:
+  The command accepts --files/-f or --dirpaths/-d options, both of which accept multiple values.
+  Option -f accepts a list of testslist.csv files.
+  Option -d accepts a list of directory paths containing files named testslist.csv.
Type `python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py --help` for details.
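For example, a hypothetical invocation using the -d option (the directory path is illustrative):
```
python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py -d ${PADDLE_ROOT}/python/paddle/fluid/tests/unittests/collective
```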
-* note:
+* note:
When committing the code, you should commit both the testslist.csv and the generated CMakeLists.txt.
Once you have pulled the repo, you don't need to run this command again until you modify the testslist.csv file.
-
+
### step 4. Build and test
Build paddle and run ctest for the new unit test
diff --git a/r/README.md b/r/README.md
index a1aab60ece448e23e537a5d0e45a7108ce0d6c5d..33f1807cd6afc979d888c967c110a88b9978e89f 100644
--- a/r/README.md
+++ b/r/README.md
@@ -55,7 +55,7 @@ Other configuration options and descriptions are as fallows.
``` r
config$enable_profile() # turn on inference profile
config$enable_use_gpu(gpu_memory_mb, gpu_id) # use GPU
-config$disable_gpu() # disable GPU
+config$disable_gpu() # disable GPU
config$gpu_device_id() # get GPU id
config$switch_ir_optim(TRUE) # turn on IR optimize(default is TRUE)
config$enable_tensorrt_engine(workspace_size,
diff --git a/tools/externalError/README.md b/tools/externalError/README.md
index 0c2ac626991da209ec3c386e81aa881f6fb8fcb0..5b308cf04d1ab79a16edabfedc6ae6725431870f 100644
--- a/tools/externalError/README.md
+++ b/tools/externalError/README.md
@@ -2,17 +2,17 @@
-1. add new spider code in spider.py for crawling error message from website.
+1. Add new spider code in spider.py for crawling error messages from websites.
2. Run `bash start.sh` in the current directory to generate a new externalErrorMsg_${date}.tar.gz file, for example `externalErrorMsg_20210928.tar.gz`.
3. Upload the above tar file into the BOS **paddlepaddledeps** bucket at https://paddlepaddledeps.bj.bcebos.com, and copy the download link `${download_url}`. ***Be careful not to delete the original tar file.***
-4. compute md5 value of above tar file `${md5}`, and modify cmake/third_party.cmake file
+4. Compute the md5 value `${md5}` of the above tar file, and modify the cmake/third_party.cmake file:
```
set(URL "${download_url}" CACHE STRING "" FORCE)
-file_download_and_uncompress(${URL} "externalError" MD5 ${md5})
+file_download_and_uncompress(${URL} "externalError" MD5 ${md5})
```
for example: