Unverified commit 3404ff67 authored by Nyakku Shigure, committed by GitHub

[CodeStyle] trim trailing whitespace in .md and .rst (#45990)

* [CodeStyle] trim trailing whitespace in .md and .rst

* empty commit, test=document_fix
Parent 1349584e
# Contribute Code
You are welcome to contribute to project PaddlePaddle. To contribute to PaddlePaddle, you have to agree with the
[PaddlePaddle Contributor License Agreement](https://gist.github.com/XiaoguangHu01/75018ad8e11af13df97070dd18ae6808).
We sincerely appreciate your contribution. This document explains our workflow and work style.
......
<p align="center">
<img align="center" src="doc/imgs/logo.png" width="1600">
</p>
--------------------------------------------------------------------------------
English | [简体中文](./README_cn.md)
......@@ -52,13 +52,13 @@ Now our developers can acquire Tesla V100 online computing resources for free. I
- **High-Performance Inference Engines for Comprehensive Deployment Environments**
PaddlePaddle is not only compatible with models trained in 3rd party open-source frameworks, but also offers complete inference products for various production scenarios. Our inference product line includes [Paddle Inference](https://paddle-inference.readthedocs.io/en/master/guides/introduction/index_intro.html): a native inference library for high-performance server and cloud inference; [Paddle Serving](https://github.com/PaddlePaddle/Serving): a service-oriented framework suitable for distributed and pipeline productions; [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite): an ultra-lightweight inference engine for mobile and IoT environments; [Paddle.js](https://www.paddlepaddle.org.cn/paddle/paddlejs): a frontend inference engine for browsers and mini-apps. Furthermore, thanks to extensive optimizations for the leading hardware in each scenario, Paddle's inference engines outperform most of the other mainstream frameworks.
- **Industry-Oriented Models and Libraries with Open Source Repositories**
PaddlePaddle includes and maintains more than 100 mainstream models that have been practiced and polished for a long time in the industry. Some of these models have won major prizes from key international competitions. Meanwhile, PaddlePaddle provides more than 200 additional pre-trained models (some with source code) to facilitate the rapid development of industrial applications.
[Click here to learn more](https://github.com/PaddlePaddle/models)
## Documentation
......@@ -71,7 +71,7 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide
- [Practice](https://www.paddlepaddle.org.cn/documentation/docs/zh/tutorial/index_cn.html)
By now you should be familiar with Fluid, and the next step is building a more efficient model or inventing your own Operator.
- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/en/api/index_en.html)
......@@ -86,11 +86,11 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide
- [Github Issues](https://github.com/PaddlePaddle/Paddle/issues): bug reports, feature requests, install issues, usage issues, etc.
- QQ discussion group: 441226485 (PaddlePaddle).
- [Forums](https://aistudio.baidu.com/paddle/forum): discuss implementations, research, etc.
## Courses
- [Server Deployments](https://aistudio.baidu.com/aistudio/course/introduce/19084): Courses introducing high-performance server deployments via local and remote services.
- [Edge Deployments](https://aistudio.baidu.com/aistudio/course/introduce/22690): Courses introducing edge deployments from mobile and IoT to web and applets.
## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).
......@@ -39,13 +39,13 @@ PaddlePaddle users can claim **free Tesla V100 online computing resources** to train models
- **A Development-Friendly, Industrial-Grade Deep Learning Framework**
The PaddlePaddle deep learning framework adopts a programming-logic-based network composition paradigm, which is easier for ordinary developers to pick up and matches their development habits. It supports both declarative and imperative programming, combining development flexibility with high performance. Network architectures can be designed automatically, with model quality surpassing that of human experts.
- **Training of Ultra-Large-Scale Deep Learning Models**
PaddlePaddle has broken through the training technology for ultra-large-scale deep learning models, delivering an open-source large-scale training platform that supports hundreds of billions of features, trillions of parameters, and hundreds of nodes. It solves the online learning problem for ultra-large-scale deep learning models and enables real-time updating of models with trillions of parameters.
[Click here to learn more](https://github.com/PaddlePaddle/Fleet)
- **High-Performance Inference and Deployment Tools for Multiple Devices and Platforms**
......@@ -66,14 +66,14 @@ PaddlePaddle users can claim **free Tesla V100 online computing resources** to train models
- [Guides](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/index_cn.html)
Perhaps you want to start learning PaddlePaddle from the basics of deep learning
- [Practice](https://www.paddlepaddle.org.cn/documentation/docs/zh/tutorial/index_cn.html)
- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/index_cn.html)
The new APIs enable programs with less and cleaner code
- [How to Contribute](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/08_contribution/index_cn.html)
......@@ -84,7 +84,7 @@ PaddlePaddle users can claim **free Tesla V100 online computing resources** to train models
- You are welcome to submit questions, bug reports, and suggestions via [Github Issues](https://github.com/PaddlePaddle/Paddle/issues)
- QQ discussion group: 441226485 (PaddlePaddle)
- [Forums](https://aistudio.baidu.com/paddle/forum): You are welcome to share problems and experience encountered while using PaddlePaddle, and to help build a friendly community atmosphere
## Courses
- [Server Deployments](https://aistudio.baidu.com/aistudio/course/introduce/19084): A detailed hands-on introduction to high-performance server-side deployment, covering both local deployment and service-oriented Serving deployment
......
......@@ -48,7 +48,7 @@ We will indicate the bug fix in the release of PaddlePaddle, and publish the vul
### What is a vulnerability?
During the execution of computation graphs in PaddlePaddle, models can perform arbitrary computations, including reading and writing files, communicating with the network, etc. This may cause memory exhaustion, deadlock, and other problems, which will lead to unexpected behavior of PaddlePaddle. We consider these behaviors to be security vulnerabilities only if they are outside the intention of the operation involved.
......
# For Readers and Developers
Thanks for reading PaddlePaddle documentation.
Since **September 17th, 2018**, the **0.15.0 and develop** documentation source has been moved to [FluidDoc Repo](https://github.com/PaddlePaddle/FluidDoc) and updated there.
......
# Directory Description
* PSServer
* PSClient
* PsService
* Communicator
......
# Inference Analysis
The `inference/analysis` module is used to analyze and optimize the inference program.
It borrows some philosophy from `LLVM/analysis`
and makes the various optimization features pluggable so that they can co-exist in a pipeline.
We borrowed some concepts from LLVM, such as
......@@ -31,14 +31,14 @@ each pass will generate unified debug information or visualization for better de
## Supported Passes
### `FluidToDataFlowGraphPass`
Transforms the fluid `ProgramDesc` to a `DataFlowGraph`, giving an abstract representation for all the middle passes;
this should be the first pass of the pipeline.
### `DataFlowGraphToFluidPass`
Generates a final `ProgramDesc` from a data flow graph; this should be the last pass of the pipeline.
### `TensorRTSubgraphNodeMarkPass`
Marks the `Node`s that are supported by TensorRT;
this pass will generate a visualization file which can be used for debugging.
### `TensorRTSubGraphPass`
......
......@@ -9,7 +9,7 @@ You can easily deploy a model trained by Paddle following the steps as below:
## The APIs
All the released APIs are located in the `paddle_inference_api.h` header file.
The stable APIs are wrapped in `namespace paddle`; the unstable APIs are kept in `namespace paddle::contrib`.
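As a rough illustration of this split (a minimal sketch; the `contrib` type named in the comment is hypothetical and only shows where unstable APIs would live):

```c++
#include "paddle_inference_api.h"

// Stable API: types such as PaddleTensor live directly in namespace paddle.
paddle::PaddleTensor tensor;

// Unstable API: experimental types are wrapped in paddle::contrib,
// e.g. a hypothetical paddle::contrib::SomeExperimentalConfig.
```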
## Write some codes
......
......@@ -2,11 +2,11 @@
There are several demos:
- simple_on_word2vec:
    - The C++ code is in `simple_on_word2vec.cc`.
    - It is suitable for the word2vec model.
- vis_demo:
    - The C++ code is in `vis_demo.cc`.
    - It is suitable for three models: mobilenet, se_resnext50 and ocr.
    - Input data format:
        - Each line contains a single record
......@@ -15,7 +15,7 @@ There are several demos:
<space split floats as data>\t<space split ints as shape>
```
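For example, a single record for a 1×3 float tensor might look like this (hypothetical values; the data and shape fields are separated by a tab):

```
0.1 0.2 0.3	1 3
```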
To build and execute the demos, simply run
```
./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU
```
......
......@@ -6,7 +6,7 @@ The APIs are described in `paddle_inference_api.h`, just one header file, and tw
## PaddleTensor
We provide the `PaddleTensor` data structure to give a general tensor interface.
The definition is
```c++
struct PaddleTensor {
......@@ -17,8 +17,8 @@ struct PaddleTensor {
};
```
The data is stored in a contiguous memory block `PaddleBuf`, and a `PaddleDType` specifies the tensor's data type.
The `name` field is used to specify the name of an input variable,
which is important when there are multiple inputs and one needs to distinguish which variable to set.
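As a minimal sketch of filling such a tensor, assuming the fields above and a `PaddleBuf(void* data, size_t length)` constructor (check the header for the exact signatures):

```c++
#include <vector>
#include "paddle_inference_api.h"

paddle::PaddleTensor MakeInput() {
  // Raw data for a 1x3 float tensor; static so the wrapped buffer
  // outlives this function (PaddleBuf here wraps without copying).
  static std::vector<float> raw = {0.1f, 0.2f, 0.3f};

  paddle::PaddleTensor tensor;
  tensor.name = "x";      // the input variable to feed
  tensor.shape = {1, 3};  // batch of one 3-float row
  tensor.data = paddle::PaddleBuf(raw.data(), raw.size() * sizeof(float));
  tensor.dtype = paddle::PaddleDType::FLOAT32;
  return tensor;
}
```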
## engine
......@@ -38,7 +38,7 @@ enum class PaddleEngineKind {
```
## PaddlePredictor and how to create one
The main interface is `PaddlePredictor`, which has the following methods:
- `bool Run(const std::vector<PaddleTensor>& inputs, std::vector<PaddleTensor>* output_data)`
  - takes the inputs and fills `output_data`.
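A minimal end-to-end sketch, assuming the legacy `NativeConfig` engine and a hypothetical model directory `./model` (field names may differ across versions):

```c++
#include <vector>
#include "paddle_inference_api.h"

int main() {
  // Configure the native engine; "./model" is an illustrative path.
  paddle::NativeConfig config;
  config.model_dir = "./model";
  config.use_gpu = false;

  // Create a predictor bound to this config.
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Feed the inputs and collect the outputs.
  std::vector<paddle::PaddleTensor> inputs;  // fill as described above
  std::vector<paddle::PaddleTensor> outputs;
  if (!predictor->Run(inputs, &outputs)) return 1;
  return 0;
}
```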
......
......@@ -5,7 +5,7 @@
The inference library contains:
- the header file `paddle_inference_api.h`, which defines all the APIs
- the library files `libpaddle_inference.so/.a` (Linux/Mac) and `libpaddle_inference.lib/paddle_inference.dll` (Windows)
Below is a detailed introduction to some API concepts
......
......@@ -126,7 +126,7 @@ MODEL_NAME=googlenet, mobilenetv1, mobilenetv2, resnet101, resnet50, vgg16, vgg1
* ## Prepare dataset
* Download and preprocess the full Pascal VOC2007 test set.
```bash
cd /PATH/TO/PADDLE
python paddle/fluid/inference/tests/api/full_pascalvoc_test_preprocess.py
......
......@@ -9,7 +9,7 @@ There are several model tests currently:
- test_resnet50_quant.cc
- test_yolov3.cc
To build and execute tests on Linux, simply run
```
./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU $DATA_DIR
```
......@@ -24,7 +24,7 @@ busybox bash ./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU $DATA_DIR
- `$TEST_GPU_CPU`: test both GPU/CPU mode or only CPU mode
- `$DATA_DIR`: download data path
Now only 4 kinds of tests are supported, controlled by the `--gtest_filter` argument; test suite names should match one of the following:
- `TEST(gpu_tester_*, test_name)`
- `TEST(cpu_tester_*, test_name)`
- `TEST(mkldnn_tester_*, test_name)`
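For instance, a GPU test would be declared as below (a hypothetical placeholder body), so that `--gtest_filter=gpu_tester_*` selects it:

```c++
#include "gtest/gtest.h"

// The suite name must start with gpu_tester_ so the filter can match it.
TEST(gpu_tester_resnet50, compare_outputs) {
  // Run the model on GPU here and compare against reference outputs.
  EXPECT_TRUE(true);  // placeholder assertion
}
```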
......
......@@ -4,7 +4,7 @@ JIT(Just In Time) Kernel contains actually generated code and some other impleme
Each implementation has its own condition to use, defined in `CanBeUsed`.
They are combined together to get the best performance of one single independent function.
They could be some very simple functions like vector multiply, or some complicated functions like LSTM.
And they can be composed with other existing jit kernels to build up a complex function.
Currently it is only supported on CPU.
## Contents
......@@ -38,14 +38,14 @@ All basical definations of jit kernels are addressed in `paddle/fluid/operators/
- `refer`: Each kernel must have one reference implementation on CPU, and it should only focus on correctness and should not depend on any third-party libraries.
- `gen`: The code generated should be kept here. They should be designed focusing on the best performance, which depends on Xbyak.
- `more`: All other implementations should be kept in this folder, with one directory corresponding to one library kind or method kind, such as mkl, mkldnn, openblas or intrinsic code. Each implementation should have its own advantage.
## How to use
We present these methods to get the functions:
- `GetAllCandidateFuncs`. It returns all the supported implementations, all of which produce the same result. You can run a runtime benchmark to choose which one should actually be used.
- `GetDefaultBestFunc`. It returns only one default function pointer, which was tuned offline with some general configurations and attributes. This should cover most situations.
- `KernelFuncs::Cache()`. It gets the default functions and saves them for the next use with the same attribute.
- `GetReferFunc`. It gets only the reference code on CPU; all the other implementations have the same logic as this reference code.
And here are some examples:
......@@ -86,7 +86,7 @@ All kernels are inlcuded in `paddle/fluid/operators/jit/kernels.h`, which is aut
1. Add `your_key` at `KernelType`.
2. Add your new `KernelTuple`, which must include `your_key`. It should be a combination of the data type, attribute type and function type. You can refer to `SeqPoolTuple`.
3. Add the reference function of `your_key`.
Note:
- it should run on CPU and must not depend on any third-party libraries.
- Add `USE_JITKERNEL_REFER(your_key)` in `refer/CmakeLists.txt` to make sure this code can be used.
......
......@@ -15,7 +15,7 @@ PaddlePaddle applications directly in docker or on Kubernetes clusters.
To achieve this, we maintain a Docker Hub repo: https://hub.docker.com/r/paddlepaddle/paddle
which provides pre-built environment images for building PaddlePaddle and generating the corresponding `whl`
binaries. (**We strongly recommend building PaddlePaddle in our pre-specified Docker environment.**)
## Development Workflow
......@@ -52,8 +52,8 @@ cd Paddle
After the build finishes, you can get output `whl` package under
`build/python/dist`.
This command will download the most recent dev image from Docker Hub, start a container in the background and then run the build script `/paddle/paddle/scripts/paddle_build.sh build` in the container.
The container mounts the source directory on the host into `/paddle`.
When it writes to `/paddle/build` in the container, it actually writes to `$PWD/build` on the host.
### Build Options
......
......@@ -7,7 +7,7 @@ Paddle can be built for linux-musl such as alpine, and be used in libos-liked SG
# Build Automatically
1. Clone the Paddle source from GitHub
```bash
git clone https://github.com/PaddlePaddle/Paddle.git
```
......@@ -50,7 +50,7 @@ mkdir -p build && cd build
ls ./output/*.whl
```
# Build Manually
1. Start up the build docker container and enter its shell
```bash
......@@ -88,7 +88,7 @@ make -j8
# Scripts
1. **build_docker.sh**
The compiling docker image building script. It uses Alpine Linux 3.10 as the musl linux build environment and tries to install all the compiling tools, development packages, and python requirements for paddle musl compiling.
environment variables:
- PYTHON_VERSION: the version of python used for image building, default=3.7.
- WITH_PRUNE_DAYS: prune old docker images, with days limitation.
......
......@@ -48,7 +48,7 @@ We provide users with four installation methods ,which are pip, conda, docker an
- **pip or pip3 version 9.0.1+ (64 bit)**
#### <a id="Commands to install">Commands to install</a>
......@@ -115,12 +115,12 @@ If you want to install witch conda or docker or pip,please see commands to insta
PaddlePaddle is not only compatible with models trained in other open-source frameworks, but also works well in ubiquitous deployment environments, varying from platforms to devices. More specifically, PaddlePaddle accelerates the inference procedure with the fastest speed-up. Note that a recent breakthrough in inference speed has been made by PaddlePaddle on Huawei's Kirin NPU, through hardware/software co-optimization.
[Click here to learn more](https://github.com/PaddlePaddle/Paddle-Lite)
- **Industry-Oriented Models and Libraries with Open Source Repositories**
PaddlePaddle includes and maintains more than 100 mainstream models that have been practiced and polished for a long time in the industry. Some of these models have won major prizes from key international competitions. Meanwhile, PaddlePaddle provides more than 200 additional pre-trained models (some with source code) to facilitate the rapid development of industrial applications.
[Click here to learn more](https://github.com/PaddlePaddle/models)
## Documentation
......@@ -135,10 +135,10 @@ We provide [English](http://www.paddlepaddle.org.cn/documentation/docs/en/1.8/be
- [User Guides](https://www.paddlepaddle.org.cn/documentation/docs/en/user_guides/index_en.html)
You might have gotten the hang of the Beginner's Guide, and wish to model practical problems and build your original networks.
- [Advanced User Guides](https://www.paddlepaddle.org.cn/documentation/docs/en/advanced_guide/index_en.html)
By now you should be familiar with Fluid, and the next step is building a more efficient model or inventing your own Operator.
- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/en/api/index_en.html)
......
......@@ -10,11 +10,11 @@ In **Release 1.7**, a support for [Ernie (NLP) Quant trained model](https://gith
In **Release 2.0**, further optimizations were added to Quant2: an INT8 `matmul` kernel, in-place execution of the activation and `elementwise_add` operators, and broader support for the quantization-aware strategy from PaddleSlim.
In this document we focus on the Quant2 approach only.
## 0. Prerequisites
* PaddlePaddle in version 2.0 or higher is required. For instructions on how to install it see the [installation document](https://www.paddlepaddle.org.cn/install/quick).
* MKL-DNN and MKL are required. The highest performance gain can be observed using CPU servers supporting AVX512 instructions.
* INT8 accuracy is best on CPU servers supporting AVX512 VNNI extension (e.g. CLX class Intel processors). A linux server supports AVX512 VNNI instructions if the output of the command `lscpu` contains the `avx512_vnni` entry in the `Flags` section. AVX512 VNNI support on Windows can be checked using the [`coreinfo`]( https://docs.microsoft.com/en-us/sysinternals/downloads/coreinfo) tool.
......@@ -54,7 +54,7 @@ Notes:
and the quantization scales have to be collected for the `input1` and `output3` tensors in the Quant model.
2. Quantization of the following operators is supported: `conv2d`, `depthwise_conv2d`, `mul`, `fc`, `matmul`, `pool2d`, `reshape2`, `transpose2`, `concat`.
3. The longer the sequence of consecutive quantizable operators in the model, the bigger the performance boost that can be achieved through quantization:
```... → conv2d → conv2d → pool2d → conv2d → conv2d → ...```
Quantizing a single operator separated from other quantizable operators can give no performance benefit or even slow down the inference:
```... → swish → fc → softmax → ...```
......@@ -94,8 +94,8 @@ The code snipped shows how the `Quant2Int8MkldnnPass` can be applied to a model
import paddle.fluid as fluid
from paddle.fluid.contrib.slim.quantization import Quant2Int8MkldnnPass
from paddle.fluid.framework import IrGraph
from paddle.fluid import core
# Create the IrGraph by Program
graph = IrGraph(core.Graph(fluid.Program().desc), for_test=False)
place = fluid.CPUPlace()
......@@ -187,7 +187,7 @@ Notes:
## 6. How to reproduce the results
The steps below show, taking ResNet50 as an example, how to reproduce the above accuracy and performance results for Image Classification models.
To reproduce NLP models results (Ernie), please follow [How to reproduce Ernie Quant results on MKL-DNN](https://github.com/PaddlePaddle/benchmark/tree/master/Inference/c%2B%2B/ernie/mkldnn/README.md).
### Prepare dataset
......
......@@ -9,9 +9,9 @@
* `os`: The supported operating system, ignoring case. If the test runs on multiple operating systems, use ";" to split them; for example, `apple;linux` means the test runs on both Apple and Linux. The supported values are `linux`, `win32` and `apple`. If the value is empty, the test runs on all operating systems.
* `arch`: the device's architecture. Similar to `os`, multiple values are split by ";", ignoring case. The supported architectures are `gpu`, `xpu`, `ASCEND`, `ASCEND_CL` and `rocm`.
* `timeout`: timeout of a unittest, in seconds. Blank means the default.
* `run_type`: run_type of a unittest. Supported values are `NIGHTLY`, `EXCLUSIVE`, `CINN`, `DIST`, `GPUPS`, `INFER`, `EXCLUSIVE:NIGHTLY` and `DIST:NIGHTLY`, which are case-insensitive.
* `launcher`: the test launcher. Supported values are test_runner.py, dist_test.sh and custom scripts' names. Blank means test_runner.py.
* `num_port`: the number of ports used in a distributed unit test. Blank means ports are assigned automatically.
* `run_serial`: whether to run in serial mode. The value can be 1 or 0; default (blank) is 0.
* `ENVS`: required environment variables. Multiple environment variables are split by ";".
* `conditions`: extra required conditions for some tests. The value is a list of boolean expressions in CMake grammar, split by ";". For example, the value can be `WITH_DGC;NOT WITH_NCCL` or `WITH_NCCL;${NCCL_VERSION} VERSION_GREATER_EQUAL 2212`. The relationship between these expressions is a conjunction.
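For example, assuming a leading `name` column (not shown above) and the column order as described, a row for a hypothetical distributed GPU test could look like:

```
test_example_allreduce,linux,gpu,120,DIST,test_runner.py,,0,http_proxy=;https_proxy=,WITH_NCCL
```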
......@@ -22,14 +22,14 @@
python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py -f ${PADDLE_ROOT}/python/paddle/fluid/tests/unittests/collective/testslist.csv
```
Then the command generates a file named CMakeLists.txt in the same directory as the testslist.csv.
* usage:
    The command accepts --files/-f or --dirpaths/-d options, both of which accept multiple values.
    Option -f accepts a list of testslist.csv files.
    Option -d accepts a list of directory paths containing files named testslist.csv.
Type `python3 ${PADDLE_ROOT}/tools/gen_ut_cmakelists.py --help` for details.
* note:
When committing the code, you should commit both the testslist.csv and the generated CMakeLists.txt. Once you have pulled the repo, you don't need to run this command unless you modify the testslist.csv file.
### step 4. Build and test
Build paddle and run ctest for the new unit test
......@@ -55,7 +55,7 @@ Other configuration options and descriptions are as fallows.
``` r
config$enable_profile() # turn on inference profile
config$enable_use_gpu(gpu_memory_mb, gpu_id) # use GPU
config$disable_gpu() # disable GPU
config$gpu_device_id() # get GPU id
config$switch_ir_optim(TRUE) # turn on IR optimize(default is TRUE)
config$enable_tensorrt_engine(workspace_size,
......
......@@ -2,17 +2,17 @@
1. Add new spider code in spider.py for crawling error messages from websites.
2. Run `bash start.sh` in the current directory to generate a new externalErrorMsg_${date}.tar.gz file, for example `externalErrorMsg_20210928.tar.gz`.
3. Upload the above tar file to the **paddlepaddledeps** bucket on BOS (https://paddlepaddledeps.bj.bcebos.com), and copy the download link `${download_url}`. **Be careful not to delete the original tar file.**
4. Compute the md5 value `${md5}` of the above tar file, and modify the cmake/third_party.cmake file:
```
set(URL "${download_url}" CACHE STRING "" FORCE)
file_download_and_uncompress(${URL} "externalError" MD5 ${md5})
```
for example:
......