Unverified commit 8e68fa02, authored by Cheerego, committed by GitHub

Align develop and v1.3 (#662)

* synchronize with develop (#642)

* update_commitid1.3 (#641)

* update inference c++ API doc (#634)

* update inference c++ API doc

* fix link

* thorough clean for doc (#644)

* thorough clean

* delete_DS_Store

* Cherrypick1.3 (#652)

* thorough clean

* delete_DS_Store

* [Don't merge now]update_install_doc (#643)

* update_install_doc

* follow_comments

* add maxdepth (#646)

* upload_md (#649)

* update_version (#650)

* Translation of 16 new apis (#651)

* fix_windows

* Final update 1.3 (#653)

* thorough clean

* delete_DS_Store

* update_1.3

* Deadlink fix (#654)

* fix_deadlinks

* update_docker

* Update release_note.rst

* Update index_cn.rst

* update_Paddle (#658)

* fix pic (#659)

* [to 1.3] cn api debug (#655) (#661)

* debug

* fix 2 -conv2d

* "锚" ==> anchor(s)
Parent 2f6c97fe
......@@ -8,7 +8,7 @@
## Features
- High-performance support for ARM CPU
- Support for Mali GPU
- Support for Adreno GPU
- GPU support on Apple devices via Metal
......@@ -55,7 +55,7 @@
### 2. Convert a Caffe model to Paddle Fluid
Please refer to [this guide](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/caffe2fluid)
### 3. ONNX
......@@ -78,5 +78,5 @@ Paddle-Mobile is provided under the relatively permissive Apache-2.0 open-source license [Apache-2.0 license](L
## Legacy Mobile-Deep-Learning
The original MDL (Mobile-Deep-Learning) project has been moved to [Mobile-Deep-Learning](https://github.com/allonli/mobile-deep-learning)
......@@ -8,7 +8,7 @@ Welcome to Paddle-Mobile GitHub project. Paddle-Mobile is a project of PaddlePad
## Features
- high performance in support of ARM CPU
- support Mali GPU
- support Adreno GPU
- support the realization of GPU Metal on Apple devices
......@@ -50,7 +50,7 @@ At present Paddle-Mobile only supports models trained by Paddle fluid. Models ca
### 1. Use Paddle Fluid directly to train
It is the most reliable method and is recommended.
### 2. Transform Caffe to Paddle Fluid model
[https://github.com/PaddlePaddle/models/tree/develop/fluid/image_classification/caffe2fluid](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/caffe2fluid)
### 3. ONNX
ONNX is the acronym of Open Neural Network Exchange. The project aims to enable full interoperability among different neural network development frameworks.
......@@ -76,4 +76,4 @@ Paddle-Mobile provides the relatively permissive Apache-2.0 open source agreement [Apa
## Old version Mobile-Deep-Learning
The original MDL (Mobile-Deep-Learning) project has been transferred to [Mobile-Deep-Learning](https://github.com/allonli/mobile-deep-learning)
......@@ -26,7 +26,7 @@
## Create a local branch
Paddle currently uses the [Git flow branching model](http://nvie.com/posts/a-successful-git-branching-model/) for development, testing, release, and maintenance; for details, please refer to the [Paddle branching guidelines](https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/fluid/design/others/releasing_process.md)
All feature and bug-fix development should be done on a new branch, usually created from the `develop` branch.
......@@ -110,7 +110,7 @@ no changes added to commit (use "git add" and/or "git commit -a")
➜ docker run -it -v $(pwd):/paddle paddle:latest-dev bash -c "cd /paddle/build && ctest"
```
For more information about building and testing, please refer to [Install and run with Docker](../../../beginners_guide/install/install_Docker.html)
## Commit
......
......@@ -3,7 +3,7 @@
This document will guide you through developing programs in a local environment.
## Coding requirements
- Please refer to the coding comment format of [Doxygen](http://www.stack.nl/~dimitri/doxygen/)
- Make sure the build option `WITH_STYLE_CHECK` is on and that the build passes the code style check.
- Unit tests are required for all code.
- All unit tests must pass.
......@@ -26,7 +26,7 @@ Clone remote git to local:
## Create local branch
At present, the [Git stream branch model](http://nvie.com/posts/a-successful-git-branching-model/) is applied to Paddle for development, testing, release, and maintenance. Please refer to the [branch regulation of Paddle](https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/fluid/design/others/releasing_process.md) for details.
All feature and bug-fix development should be done on a new branch created from the `develop` branch.
......@@ -80,7 +80,7 @@ no changes added to commit (use "git add" and/or "git commit -a")
A variety of development tools are needed to build the PaddlePaddle source code and generate documentation. For convenience, our standard development procedure packs these tools into a Docker image, called the *development image*, usually named `paddle:latest-dev` or `paddle:[version tag]-dev`, such as `paddle:0.11.0-dev`. Then everything that needs `cmake && make`, such as IDE configuration, is replaced by `docker run paddle:latest-dev`.
You need to build this development image under the root directory of the source code tree:
```bash
➜ docker build -t paddle:latest-dev .
......@@ -110,7 +110,7 @@ Run all unit tests with following commands:
➜ docker run -it -v $(pwd):/paddle paddle:latest-dev bash -c "cd /paddle/build && ctest"
```
Please refer to [Installation and run with Docker](../../../beginners_guide/install/install_Docker.html) for more information about building and testing.
## Commit
......
......@@ -2,12 +2,9 @@
Add operators
#############
- `How to write a new operator <../../../advanced_usage/development/new_op.html>`_ : introduces how to add a new Operator in Fluid
- `Notes on operators <../../../advanced_usage/development/new_op/op_notes.html>`_ : introduces op-related caveats
.. toctree::
:hidden:
new_op_cn.md
op_notes.md
......@@ -8,10 +8,10 @@ The core method of an Op is Run; the Run method requires two kinds of resources: data resources and comp
The Fluid framework is designed to run on a variety of devices and third-party libraries, and some Ops may be implemented differently depending on the device or library. For this, Fluid introduces the OpKernel mechanism: one Op can have multiple OpKernels. Such Ops inherit from `OperatorWithKernel`; a representative is conv, and conv_op's OpKernels include `GemmConvKernel`, `CUDNNConvOpKernel`, and `ConvMKLDNNOpKernel`, each available in both double and float data types. Representative Ops that do not need an OpKernel include `WhileOp`.
Operator inheritance diagram:
![op_inheritance_relation_diagram](../../pics/op_inheritance_relation_diagram.png)
For further information, refer to: [multi_devices](https://github.com/PaddlePaddle/FluidDoc/tree/develop/doc/fluid/design/multi_devices), [scope](https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/fluid/design/concepts/scope.md), [Developer's_Guide_to_Paddle_Fluid](https://github.com/PaddlePaddle/FluidDoc/blob/release/1.2/doc/fluid/getstarted/Developer's_Guide_to_Paddle_Fluid.md)
### 2. Op registration logic
The registration entries for each Operator include:
......@@ -75,7 +75,7 @@ Operator inheritance diagram:
Registering an Op usually requires calling REGISTER_OPERATOR, i.e.:
```
REGISTER_OPERATOR(op_type,
OperatorBase,
op_maker_and_checker_maker,
op_grad_opmaker,
......@@ -83,7 +83,7 @@ Operator inheritance diagram:
op_infer_var_type)
```
**Note:**
1. For all Ops, the first three parameters are required: op_type gives the op's name, OperatorBase is the op's class, and op_maker_and_checker_maker is the op's maker together with the checker for the op's attrs.
2. If the Op has a backward pass, op_grad_opmaker is required, because during backward the Maker of the backward Op is obtained from the forward Op.
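As a rough illustration of this registration logic, here is a minimal Python registry sketch. All names here (`OP_REGISTRY`, `register_operator`, `get_grad_opmaker`) are hypothetical and are not Paddle's actual C++ API; the sketch only mirrors the two rules above: the first components are required, and a backward op's Maker is looked up from the forward op's entry.

```python
# Hypothetical sketch of an operator registry (NOT Paddle's real API).
OP_REGISTRY = {}

def register_operator(op_type, op_class, maker, grad_opmaker=None):
    """Mirror REGISTER_OPERATOR: op_type, the op class, and the maker are required."""
    if op_type in OP_REGISTRY:
        raise ValueError("op '%s' is already registered" % op_type)
    OP_REGISTRY[op_type] = {
        "class": op_class,
        "maker": maker,                # maker plus attribute checker
        "grad_opmaker": grad_opmaker,  # present only if the op has a backward
    }

def get_grad_opmaker(op_type):
    """During backward, the backward op's Maker is fetched from the forward op."""
    entry = OP_REGISTRY[op_type]
    if entry["grad_opmaker"] is None:
        raise ValueError("op '%s' has no backward op" % op_type)
    return entry["grad_opmaker"]

# Example usage with dummy components:
register_operator("my_conv", object, maker=lambda: "attrs checked",
                  grad_opmaker=lambda: "my_conv_grad")
```

Looking up an unregistered op_type fails, which matches the intent that every op must be registered before use.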
......@@ -139,7 +139,7 @@ The following device operations are asynchronous with respect to the host:
- If data is transferred from the GPU to non-page-locked (pageable) CPU memory, the transfer is synchronous, even if an asynchronous copy operation is called.
- If data is transferred from CPU to CPU, the transfer is synchronous, even if an asynchronous copy operation is called.
For more details, see: [Asynchronous Concurrent Execution](https://docs.nvidia.com/cuda/cuda-c-programming-guide/#asynchronous-concurrent-execution) and [API synchronization behavior](https://docs.nvidia.com/cuda/cuda-runtime-api/api-sync-behavior.html#api-sync-behavior)
## Op performance optimization
### 1. Choosing third-party libraries
......
......@@ -11,7 +11,7 @@ The Fluid framework is designed to run on a variety of devices and third-party l
Operator inheritance diagram:
![op_inheritance_relation_diagram](../../pics/op_inheritance_relation_diagram.png)
For further information, please refer to: [multi_devices](https://github.com/PaddlePaddle/FluidDoc/tree/develop/doc/fluid/design/multi_devices) , [scope](https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/fluid/design/concepts/scope.md) , [Developer's_Guide_to_Paddle_Fluid](https://github.com/PaddlePaddle/FluidDoc/blob/release/1.2/doc/fluid/getstarted/Developer's_Guide_to_Paddle_Fluid.md)
### 2. Op's registration logic
The registration entries for each Operator include:
......@@ -67,7 +67,7 @@ The registration entries for each Operator include:
<tr>
<td>OpCreator </td>
<td>Functor </td>
<td>Create a new OperatorBase for each call </td>
<td>Call at runtime </td>
</tr>
</tbody>
......@@ -150,7 +150,7 @@ The calculation speed of Op is related to the amount of data input. For some Op,
Since the call of CUDA Kernel has a certain overhead, multiple calls of the CUDA Kernel in Op may affect the execution speed of Op. For example, the previous sequence_expand_op contains many CUDA Kernels. Usually, these CUDA Kernels process a small amount of data, so frequent calls to such Kernels will affect the calculation speed of Op. In this case, it is better to combine these small CUDA Kernels into one. This idea is used in the optimization of the sequence_expand_op procedure (related PR[#9289](https://github.com/PaddlePaddle/Paddle/pull/9289)). The optimized sequence_expand_op is about twice as fast as the previous implementation, the relevant experiments are introduced in the PR ([#9289](https://github.com/PaddlePaddle/Paddle/pull/9289)).
Reduce the number of copy and sync operations between the CPU and the GPU. For example, the fetch operation will update the model parameters and get a loss after each iteration, and the copy of the data from the GPU to the Non-Pinned-Memory CPU is synchronous, so frequent fetching for multiple parameters will reduce the model training speed.
## Op numerical stability
### 1. Some Ops have numerical stability problems
......
......@@ -16,7 +16,7 @@ gperftool mainly supports the following four functions:
- heap-profiling using tcmalloc
- CPU profiler
Paddle also provides a gperftool-based [CPU profiling tutorial](./cpu_profiling_cn.html)
For heap memory analysis, thread-caching malloc and heap-profiling using tcmalloc are mainly used.
......@@ -29,7 +29,7 @@ Paddle also provides a gperftool-based [CPU profiling tutorial](https://github.com/P
- Install google-perftools
```
apt-get install libunwind-dev
apt-get install google-perftools
```
......@@ -73,17 +73,17 @@ env HEAPPROFILE="./perf_log/test.log" HEAP_PROFILE_ALLOCATION_INTERVAL=209715200
pprof --pdf python test.log.0012.heap
```
The command above generates a profile00x.pdf file that can be opened directly, for example: [memory_cpu_allocator](https://github.com/jacquesqiao/Paddle/blob/bd2ea0e1f84bb6522a66d44a072598153634cade/doc/fluid/howto/optimization/memory_cpu_allocator.pdf). As the figure shows, during a run of the CPU version of Fluid, the module that allocates the most memory is CPUAllocator; other modules allocate relatively little and are therefore omitted. This is inconvenient for finding memory leaks, because a leak is a gradual process and cannot be seen in such a figure.
![result](https://user-images.githubusercontent.com/3048612/40964027-a54033e4-68dc-11e8-836a-144910c4bb8c.png)
- Diff mode. You can diff the heap at two moments, removing the modules whose memory allocation has not changed and showing only the incremental part.
```
pprof --pdf --base test.log.0010.heap python test.log.1045.heap
```
The generated result: [`memory_leak_protobuf`](https://github.com/jacquesqiao/Paddle/blob/bd2ea0e1f84bb6522a66d44a072598153634cade/doc/fluid/howto/optimization/memory_leak_protobuf.pdf)
The figure shows that the ProgramDesc structure grew by 200MB+ between the two snapshots, so a memory leak is very likely here, and the final result did prove that the leak occurred here.
![result](https://user-images.githubusercontent.com/3048612/40964057-b434d5e4-68dc-11e8-894b-8ab62bcf26c2.png)
![result](https://user-images.githubusercontent.com/3048612/40964063-b7dbee44-68dc-11e8-9719-da279f86477f.png)
......@@ -16,7 +16,7 @@ gperftool mainly supports four functions:
- heap-profiling using tcmalloc
- CPU profiler
Paddle also provides a [tutorial on CPU performance analysis](./cpu_profiling_en.html) based on gperftool.
For heap analysis, we mainly use thread-caching malloc and heap-profiling using tcmalloc.
......@@ -29,7 +29,7 @@ This tutorial is based on the Docker development environment paddlepaddle/paddle
- Install google-perftools
```
apt-get install libunwind-dev
apt-get install google-perftools
```
......@@ -74,15 +74,15 @@ As the program runs, a lot of files will be generated in the perf_log folder as
```
The command above will generate a file of profile00x.pdf, which can be opened directly, for example, [memory_cpu_allocator](https://github.com/jacquesqiao/Paddle/blob/bd2ea0e1f84bb6522a66d44a072598153634cade/doc/fluid/howto/optimization/memory_cpu_allocator.pdf). As demonstrated in the chart below, during the running of the CPU version of Fluid, the module CPUAllocator is allocated the most memory. Other modules are allocated relatively little memory, so they are omitted. This is inconvenient for inspecting memory leaks, because a leak is a gradual process that cannot be seen in this picture.
![result](https://user-images.githubusercontent.com/3048612/40964027-a54033e4-68dc-11e8-836a-144910c4bb8c.png)
- Diff mode. You can do diff on the heap at two moments, which removes some modules whose memory allocation has not changed, and displays the incremental part.
```
pprof --pdf --base test.log.0010.heap python test.log.1045.heap
```
The generated result: [`memory_leak_protobuf`](https://github.com/jacquesqiao/Paddle/blob/bd2ea0e1f84bb6522a66d44a072598153634cade/doc/fluid/howto/optimization/memory_leak_protobuf.pdf)
As shown from the figure: The structure of ProgramDesc has increased by 200MB+ between the two versions, so there is a large possibility that memory leak happens here, and the final result does prove a leak here.
![result](https://user-images.githubusercontent.com/3048612/40964057-b434d5e4-68dc-11e8-894b-8ab62bcf26c2.png)
![result](https://user-images.githubusercontent.com/3048612/40964063-b7dbee44-68dc-11e8-9719-da279f86477f.png)
......@@ -12,9 +12,7 @@
This module introduces tuning methods for Fluid, including:
- `How to benchmark <benchmark.html>`_ : how to choose a benchmark model to verify accuracy and performance
- `Tune CPU performance <cpu_profiling_cn.html>`_ : how to profile and tune with the cProfile package, the yep library, and Google perftools
- `Tune GPU performance <gpu_profiling_cn.html>`_ : how to profile and tune with Fluid's built-in timing tool, nvprof, or nvvp
- `Heap memory profiling and optimization <host_memory_profiling_cn.html>`_ : how to use gperftool for heap memory analysis and optimization to solve memory leaks
- `Introduction to the Timeline tool <timeline_cn.html>`_ : how to profile and tune with the Timeline tool
......
......@@ -27,15 +27,15 @@ python Paddle/tools/timeline.py --profile_path=/tmp/profile --timeline_path=time
1. Open the Chrome browser, visit <chrome://tracing/>, and use the `load` button to load the generated `timeline` file.
![chrome tracing](../tracing.jpeg)
1. The result is shown in the figure below; you can zoom in to view the details of the timeline.
![chrome timeline](./timeline.jpeg)
## Distributed usage
Generally, distributed training involves two kinds of processes: pserver and trainer. We provide a way to display the profile logs of both pserver and trainer on a timeline.
1. The trainer is enabled in the same way as step 1 of the [local usage](#local) section
1. The pserver can enable profiling by adding two environment variables, for example:
......@@ -43,7 +43,7 @@ python Paddle/tools/timeline.py --profile_path=/tmp/profile --timeline_path=time
FLAGS_rpc_server_profile_period=10 FLAGS_rpc_server_profile_path=./tmp/pserver python train.py
```
3. Generate a single timeline file from the pserver and trainer profile files, for example:
```
python /paddle/tools/timeline.py
--profile_path trainer0=local_profile_10_pass0_0,trainer1=local_profile_10_pass0_1,pserver0=./pserver_0,pserver1=./pserver_1
......
......@@ -172,7 +172,7 @@ pip install -r requirements.txt
## Submit changes
If you wish to modify code, please follow [How to contribute code](../development/contribute_to_paddle/index_cn.html) in the `Paddle` repository.
If you only modify documentation:
......@@ -193,7 +193,7 @@ pip install -r requirements.txt
3. Submit a PR for your changes in the `FluidDoc` repository
The steps for submitting changes and PRs can be found in [How to contribute code](../development/contribute_to_paddle/index_cn.html)
## Help improve the preview tool
......
......@@ -114,7 +114,7 @@ After completing the installation, you will also need to do:
In the docker container that compiles the code, or the corresponding location in the host machine:
- Run the script `paddle/scripts/paddle_build.sh` (under Paddle repo)
```bash
# Compile paddle's python library
cd Paddle
......@@ -129,7 +129,7 @@ After completing the installation, you will also need to do:
docker run -it -v /Users/xxxx/workspace/paddlepaddle_workplace:/workplace -p 8000:8000 [images_id] /bin/bash
```
> Where `/Users/xxxx/workspace/paddlepaddle_workplace` should be replaced with your local host paddle workspace directory, `/workplace` should be replaced with the working directory in the docker. This mapping will ensure that we compile the python library, modify FluidDoc and use the preview tool at the same time.
> [images_id] is the image id of the paddlepaddle you use in docker.
......@@ -158,7 +158,7 @@ After completing the installation, you will also need to do:
```
---
**Preview modification**
......@@ -166,26 +166,26 @@ After completing the installation, you will also need to do:
Open your browser and navigate to http://localhost:8000 .
On the page to be updated, click Refresh Content at the top right corner.
After entering the documentation page, the API section does not contain content. To preview the API document, please click on the API directory; you will see the generated API reference after a few minutes.
## Submit changes
If you wish to modify the code, please refer to [How to contribute code](../development/contribute_to_paddle/index_en.html) under the `Paddle` repository.
If you just modify the document:
- The modified content is in the `doc` folder, you only need to submit `PR` in the `FluidDoc` repository.
- The modified content is in the `external` folder:
1. Submit the PR in the repository you modified. This is because the `FluidDoc` repository is just a wrapper that brings together the links of other repositories (namely, the "submodules" in git context).
2. When your changes are approved, update the corresponding `submodule` in FluidDoc to the latest commit-id of the source repository.
> For example, you updated the document on the develop branch in the book repository:
> - Go to the `FluidDoc/external/book` directory
> - Update commit-id to the latest commit: `git pull origin develop`
......@@ -193,7 +193,7 @@ If you just modify the document:
3. Pull Request for your changes in the `FluidDoc` repository
The steps for submitting changes and PRs are described in [How to contribute code](../development/contribute_to_paddle/index_en.html)
## Help improve preview tool
......
......@@ -6,13 +6,13 @@
If you are already very familiar with Fluid and want more efficient models or to define your own Operators, please read:
- `Fluid design philosophy <../advanced_usage/design_idea/fluid_design_idea.html>`_ : introduces the underlying design of Fluid to help you better understand how the framework operates
- `Deploy for inference <../advanced_usage/deploy/index_cn.html>`_ : introduces how to run inference with a trained model
- `Add operators <../advanced_usage/development/new_op/index_cn.html>`_ : introduces how to add operators and related caveats
- `Performance tuning <../advanced_usage/development/profiling/index_cn.html>`_ : introduces tuning methods for Fluid
You are very welcome to contribute to our open-source community. For how to contribute your code or documentation, please read:
......
......@@ -61,7 +61,7 @@
- Can I compile in parallel?
> Yes. Our Docker image runs a [Bash script](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/scripts/paddle_build.sh). This script calls `make -j$(nproc)` to start as many parallel compilation processes as there are CPU cores.
- Is compiling on Windows/macOS slow?
......
......@@ -67,20 +67,20 @@
- How hard is it to learn Docker?
> It's not difficult to understand Docker. It takes about ten minutes to read this [article](https://zhuanlan.zhihu.com/p/19902938).
This saves you an hour of installing and configuring various development tools, and avoids re-installation when switching machines. Don't forget that PaddlePaddle updates may require new development tools, not to mention the benefit of simplifying problem reproduction.
- Can I use an IDE?
> Of course, because the source code is on the machine. By default, the IDE calls a program like make to compile the source code. We only need to configure the IDE to call the Docker command to compile the source code.
Many PaddlePaddle developers use Emacs. They add two lines to their `~/.emacs` configuration file.
`global-set-key "\C-cc" 'compile`
`setq compile-command "docker run --rm -it -v $(git rev-parse --show-toplevel):/paddle paddle:dev"`
You can start the compilation by pressing `Ctrl-C` and `c`.
- Can I compile in parallel?
> Yes. Our Docker image runs a [Bash script](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/scripts/paddle_build.sh). This script calls `make -j$(nproc)` to start as many processes as the CPU cores to compile in parallel.
- Docker needs sudo?
......
......@@ -36,7 +36,7 @@
<tr>
<td> SWIG </td>
<td> 2.0 at minimum </td>
<td> </td>
<td> <code>apt install swig </code> or <code> yum install swig </code> </td>
</tr>
<tr>
......@@ -211,7 +211,7 @@
**BLAS**
PaddlePaddle supports two BLAS libraries, [MKL](https://software.intel.com/en-us/mkl) and [OpenBLAS](http://www.openblas.net). MKL is used by default. If MKL is used and the machine has the AVX2 instruction set, the MKL-DNN math library is also downloaded; for details see [here](https://github.com/PaddlePaddle/Paddle/tree/release/0.11.0/doc/design/mkldnn#cmake)
If MKL is disabled, OpenBLAS is used as the BLAS library.
......@@ -531,7 +531,7 @@ PaddlePaddle references the various BLAS/CUDA/cuDNN libraries via paths specified at compile time.
***
Suppose you have written a PaddlePaddle program `train.py` in the current directory (for example /home/work); see
[PaddlePaddleBook](https://github.com/PaddlePaddle/book/blob/develop/01.fit_a_line/README.cn.md)
for how to write it. You can then start training with the following commands:
cd /home/work
......@@ -597,7 +597,7 @@ PaddlePaddle Book is an interactive Jupyter Note
**About AVX:**
AVX is a CPU instruction set that accelerates PaddlePaddle's computation. The latest PaddlePaddle Docker images
are compiled with AVX enabled by default, so if your computer does not support AVX, you need to compile a no-avx version of PaddlePaddle yourself.
The following command checks whether a Linux machine supports AVX:
......
......@@ -209,7 +209,7 @@
**BLAS**
PaddlePaddle supports two BLAS libraries, [MKL](https://software.intel.com/en-us/mkl) and [OpenBLAS](http://www.openblas.net/). MKL is used by default. If you use MKL and the machine supports the AVX2 instruction set, the MKL-DNN math library will also be downloaded; for details please refer to [here](https://github.com/PaddlePaddle/Paddle/tree/release/0.11.0/doc/design/mkldnn#cmake).
If you disable MKL, OpenBLAS will be used as the BLAS library.
......@@ -565,7 +565,7 @@ You can find the docker image for each release of PaddlePaddle in the [DockerHub
***
Suppose you have written a PaddlePaddle program `train.py` in the current directory (such as /home/work); refer to [PaddlePaddleBook](https://github.com/PaddlePaddle/book/blob/develop/01.fit_a_line/README.cn.md) for how to write it. You can then start training with the following command:
cd /home/work
......@@ -629,7 +629,7 @@ In order to ensure that the GPU driver works properly in the image, we recommend
**About AVX:**
AVX is a set of CPU instructions that speeds up the calculation of PaddlePaddle. The latest PaddlePaddle Docker image enables AVX compilation by default, so if your computer does not support AVX, you need to compile a no-avx version of PaddlePaddle separately.
The following command checks whether your Linux computer supports AVX:
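The actual shell command is elided from this diff view. As a supplementary sketch (not part of the original document), the same flag can be read from `/proc/cpuinfo` in Python on Linux:

```python
# Sketch: check for the "avx" flag in /proc/cpuinfo (Linux only).
def cpu_supports_avx(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line lists the supported instruction-set extensions.
                    return "avx" in line.split(":", 1)[1].split()
    except OSError:  # not Linux, or the file is unavailable
        pass
    return False

print("AVX supported:", cpu_supports_avx())
```

On non-Linux systems the function simply returns False, so treat it as a hint rather than an authoritative check.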
......
......@@ -81,9 +81,9 @@
After entering the Docker container, please run the following commands:
* ***CPU version of PaddlePaddle***: `pip uninstall paddlepaddle`
* ***GPU version of PaddlePaddle***: `pip uninstall paddlepaddle-gpu`
Or delete the Docker container directly via `docker rm [Name of container]`
......@@ -11,13 +11,13 @@
Before building a model, you first need to understand several core Fluid concepts:
## Express data with Tensor
Like other mainstream frameworks, Fluid uses the Tensor data structure to hold data.
All data passed through a neural network is a Tensor, which can simply be understood as a multi-dimensional array; in general it can have any number of dimensions. Each Tensor has its own data type and shape, all elements within one Tensor share the same data type, and the shape of a Tensor is its dimensions.
The figure below visualizes Tensors with one to six dimensions:
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/beginners_guide/image/tensor.jpg" width="400">
</p>
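As an illustrative sketch (added here, not from the original document), the shape of a Tensor is analogous to the nesting depth and per-level lengths of a nested Python list:

```python
# Sketch: infer the "shape" of a nested list, analogous to a Tensor's shape.
def shape_of(data):
    dims = []
    while isinstance(data, list):
        dims.append(len(data))
        data = data[0]  # assumes all sub-lists have the same length
    return dims

vec = [1.0, 2.0, 3.0]            # 1-D Tensor of shape [3]
mat = [[1, 2], [3, 4], [5, 6]]   # 2-D Tensor of shape [3, 2]
print(shape_of(vec))  # [3]
print(shape_of(mat))  # [3, 2]
```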
......@@ -96,7 +96,7 @@ type {
}
persistable: false
```
The concrete output values are obtained when the Executor runs; the detailed process is described later in this document.
## Feed data
......@@ -122,7 +122,7 @@ Fluid has a specific way of feeding data:
# Define the network
import paddle.fluid as fluid
a = fluid.layers.data(name="a",shape=[1],dtype='float32')
b = fluid.layers.data(name="b",shape=[1],dtype='float32')
result = fluid.layers.elementwise_add(a,b)
......@@ -136,7 +136,7 @@ import numpy
data_1 = input("a=")
data_2 = input("b=")
x = numpy.array([[data_1]])
y = numpy.array([[data_2]])
# Run the computation
outs = exe.run(
......@@ -174,12 +174,12 @@ print outs
```
[array([[7]]), array([[3]]), array([[10]])]
```
## Describe the neural network model with Program
Unlike most other deep learning frameworks, Fluid drops the concept of a static computation graph and instead describes the computation dynamically in the form of a Program. This dynamic description combines the flexibility of modifying the network structure with the convenience of model building, and greatly improves the framework's expressiveness while preserving performance.
All of a developer's Operators are written into a Program, which Fluid automatically converts internally into a description language called ProgramDesc. Defining a Program feels like writing an ordinary program, so developers with programming experience will naturally transfer their knowledge when using Fluid.
Fluid supports three execution structures, sequence, branch, and loop, which can be combined to describe models of arbitrary complexity.
......@@ -218,12 +218,10 @@ with fluid.layers.control_flow.Switch() as switch:
fluid.layers.tensor.assign(input=two_var, output=lr)
```
For the detailed design philosophy of Program in Fluid, see [Fluid design philosophy](../../advanced_usage/design_idea/fluid_design_idea.html)
For more control flow in Fluid, see the [API documentation](../../api_cn/layers_cn.html#control-flow)
## Run a Program with Executor
Fluid's design resembles high-level programming languages such as C++ and Java: program execution is divided into a compilation phase and an execution phase.
......@@ -261,27 +259,27 @@ outs = exe.run(
2. Define the data
Assume the input data is X=[1,2,3,4] and Y=[2,4,6,8]; define in the network:
```python
# Define the values of X
train_data=numpy.array([[1.0],[2.0],[3.0],[4.0]]).astype('float32')
# Define the expected ground-truth values y_true
y_true = numpy.array([[2.0],[4.0],[6.0],[8.0]]).astype('float32')
```
3. Build the network (define the forward computation logic)
Next, define the relationship between the prediction and the input; this example uses a simple linear regression function for prediction:
```python
# Define the input data type
x = fluid.layers.data(name="x",shape=[1],dtype='float32')
# Build a fully connected network
y_predict = fluid.layers.fc(input=x,size=1,act=None)
```
Such a network can already make predictions, although the output is just a set of random numbers, still far from the expected result:
```python
# Load libraries
import paddle.fluid as fluid
......@@ -303,9 +301,9 @@ outs = exe.run(
# Inspect the results
print outs
```
Output:
```
[array([[0.74079144],
[1.4815829 ],
......@@ -313,8 +311,8 @@ outs = exe.run(
[2.9631658 ]], dtype=float32)]
```
4. Add a loss function
After building the model, how do we evaluate the quality of its predictions? We usually add a loss function to the network to measure the gap between the ground truth and the prediction.
In this example, the loss function is the [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error):
......@@ -323,7 +321,7 @@ outs = exe.run(
avg_cost = fluid.layers.mean(cost)
```
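As a plain-Python sketch of what this mean-squared-error loss computes (illustrative only; the per-sample cost line is elided in this diff view, and the prediction values below are made-up numbers for the ground truth y = 2x used in this tutorial):

```python
# Sketch: mean squared error between predictions and ground-truth values.
def mse(y_pred, y_true):
    assert len(y_pred) == len(y_true)
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_pred)

# Made-up early predictions vs. the ground truth y = 2x:
y_pred = [0.9, 1.8, 2.7, 3.6]
y_true = [2.0, 4.0, 6.0, 8.0]
print(mse(y_pred, y_true))  # about 9.075, close to the ~9.0 first-round loss reported below
```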
Output the prediction and the loss after one round of computation:
```python
# Load libraries
import paddle.fluid as fluid
......@@ -350,7 +348,7 @@ outs = exe.run(
print outs
```
Output:
```
[array([[0.9010564],
[1.8021128],
......@@ -359,17 +357,17 @@ outs = exe.run(
```
We can see the loss after the first round of computation is 9.0, leaving plenty of room to decrease.
5. Optimize the network
Once the loss function is determined, the loss can be obtained by forward computation, and the gradients of the parameters by the chain rule.
After obtaining the gradients, the parameters need to be updated. The simplest algorithm is stochastic gradient descent, w = w − η⋅g, implemented by `fluid.optimizer.SGD`:
```python
sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.01)
```
Let's train the network for 100 iterations and check the result:
```python
#load libraries
import paddle.fluid as fluid
......@@ -408,7 +406,7 @@ outs = exe.run(
[7.8866425]], dtype=float32), array([0.01651453], dtype=float32)]
```
After 100 iterations, the predicted values are already very close to the ground truth, and the loss has dropped from its initial value of 9.05 to 0.01.
Congratulations! You have successfully built your first simple network. To try the advanced version of linear regression, a housing price prediction model, please read [linear regression](../../beginners_guide/quick_start/fit_a_line/README.cn.html). More model examples can be found in the [model zoo](../../user_guides/models/index_cn.html).
<a name="what_next"></a>
......@@ -11,13 +11,13 @@ This document will instruct you to program and create a simple neural network wi
Before building a model, you need to understand several core concepts of Fluid first:
## Express data with Tensor
Like other mainstream frameworks, Fluid uses Tensor to hold data.
All data passed through a neural network are Tensors, which can simply be regarded as multi-dimensional arrays. In general, a Tensor can have any number of dimensions. Each Tensor has its own data type and shape; the data type of every element in a single Tensor is the same, and **the shape of a Tensor** refers to its dimensions.
The picture below visualizes Tensors with one to six dimensions:
<p align="center">
<img src="https://raw.githubusercontent.com/PaddlePaddle/FluidDoc/develop/doc/fluid/beginners_guide/image/tensor.jpg" width="400">
</p>
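In practice, the data fed into Fluid is held in numpy arrays, whose `ndim` and `shape` correspond directly to a Tensor's number of dimensions and its shape:

```python
import numpy

t1 = numpy.array([1, 2, 3])             # 1-D Tensor, shape (3,)
t2 = numpy.array([[1, 2], [3, 4]])      # 2-D Tensor, shape (2, 2)
t3 = numpy.zeros((2, 3, 4))             # 3-D Tensor, shape (2, 3, 4)

print(t1.shape, t2.shape, t3.shape)
```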
......@@ -46,21 +46,21 @@ y = fluid.layers.fc(input=x, size=128, bias_attr=True)
**2. Input and Output Tensor**
The input data of the whole neural network is also a special Tensor, in which the sizes of some dimensions cannot be determined when the model is defined. Such dimensions usually include the batch size, or the width and height of images when those are not constant within a mini-batch. Placeholders for these uncertain dimensions are necessary in the model definition phase.
`fluid.layers.data` is used to receive input data in Fluid, and it needs to be provided with the shape of the input Tensor. When part of the shape is uncertain, the corresponding dimension is defined as None.
The code below exemplifies the usage of `fluid.layers.data` :
```python
import paddle.fluid as fluid
#Define the dimension of x as [3,None]. All we can be sure of is that the first dimension of x is 3.
#The second dimension is unknown and can only be known at runtime.
x = fluid.layers.data(name="x", shape=[3,None], dtype="int64")
#batch size doesn't have to be defined explicitly.
#Fluid will automatically treat the zeroth dimension as the batch size and fill in the right number at runtime.
a = fluid.layers.data(name="a",shape=[3,4],dtype='int64')
......@@ -126,7 +126,7 @@ For example, you can use `paddle.fluid.layers.elementwise_add()` to add up two i
#Define network
import paddle.fluid as fluid
a = fluid.layers.data(name="a",shape=[1],dtype='float32')
b = fluid.layers.data(name="b",shape=[1],dtype='float32')
result = fluid.layers.elementwise_add(a,b)
......@@ -140,7 +140,7 @@ import numpy
data_1 = input("a=")
data_2 = input("b=")
x = numpy.array([[data_1]])
y = numpy.array([[data_2]])
#Run computing
outs = exe.run(
......@@ -178,7 +178,7 @@ Output:
```
[array([[7]]), array([[3]]), array([[10]])]
```
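For comparison, the same computation outside of Fluid is plain element-wise addition on numpy arrays:

```python
import numpy

a = numpy.array([[7]])
b = numpy.array([[3]])
result = a + b  # element-wise, like fluid.layers.elementwise_add

print(result)  # [[10]]
```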
## Use Program to describe neural network model
Fluid differs from most other deep learning frameworks: instead of a static computation graph, the network is described dynamically by a Program. This dynamic method delivers both flexible modification of the network structure and convenient model building, and significantly enhances the capability to express a model while performance is guaranteed.
......@@ -222,10 +222,10 @@ with fluid.layers.control_flow.Switch() as switch:
fluid.layers.tensor.assign(input=two_var, output=lr)
```
For detailed design principles of Program, please refer to [Design principle of Fluid](../../advanced_usage/design_idea/fluid_design_idea_en.html).
For more about control flow in Fluid, please refer to [Control Flow](../../api/layers.html#control-flow).
## Use Executor to run Program
......@@ -265,27 +265,27 @@ Firstly, define input data format, model structure,loss function and optimized a
2. Define data
Supposing the input data is X=[1 2 3 4] and Y=[2,4,6,8], define them in the network:
```python
#define X
train_data=numpy.array([[1.0],[2.0],[3.0],[4.0]]).astype('float32')
#define the ground-truth y_true that we expect the model to predict
y_true = numpy.array([[2.0],[4.0],[6.0],[8.0]]).astype('float32')
```
3. Create network (define forward computing logic)
Next you need to define the relationship between the predicted value and the input. Take a simple linear regression function for example:
```python
#define input data type
x = fluid.layers.data(name="x",shape=[1],dtype='float32')
#create fully connected network
y_predict = fluid.layers.fc(input=x,size=1,act=None)
```
Now the network can make predictions, although the output is just a group of random numbers, far from the expected results:
```python
#load library
import paddle.fluid as fluid
......@@ -307,9 +307,9 @@ Firstly, define input data format, model structure,loss function and optimized a
#observe result
print outs
```
Output:
```
[array([[0.74079144],
[1.4815829 ],
......@@ -317,8 +317,8 @@ Firstly, define input data format, model structure,loss function and optimized a
[2.9631658 ]], dtype=float32)]
```
4. Add loss function
After constructing the model, how do we evaluate the quality of its predictions? We usually add a loss function to the network to compute the *distance* between the ground-truth and predicted values.
In this example, we adopt the [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) as our loss function:
......@@ -328,7 +328,7 @@ Firstly, define input data format, model structure,loss function and optimized a
avg_cost = fluid.layers.mean(cost)
```
Output the predicted values and the loss after one round of computation:
```python
#load library
import paddle.fluid as fluid
......@@ -355,7 +355,7 @@ Firstly, define input data format, model structure,loss function and optimized a
print outs
```
Output:
```
[array([[0.9010564],
[1.8021128],
......@@ -364,17 +364,17 @@ Firstly, define input data format, model structure,loss function and optimized a
```
We can see that the loss after the first iteration of computing is about 9.0, leaving plenty of room for improvement.
5. Optimization of network
After the loss function is defined, you can get the loss value by forward computation and then the gradients of the parameters by the chain rule.
Parameters should be updated after the gradients are obtained. The simplest algorithm is stochastic gradient descent: w=w−η⋅g, which is implemented by `fluid.optimizer.SGD`:
```python
sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.01)
```
Let's train the network for 100 iterations and check the results:
```python
#load library
import paddle.fluid as fluid
......@@ -413,7 +413,7 @@ Firstly, define input data format, model structure,loss function and optimized a
[7.8866425]], dtype=float32), array([0.01651453], dtype=float32)]
```
After 100 iterations, the predicted values are very close to the real values, and the loss has descended from its original value of 9.05 to 0.01.
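The same training loop can be reproduced by hand in numpy, which makes the w=w−η⋅g update explicit. This is a sketch under the same data; it starts from zero instead of Fluid's random initialization, so the final loss differs slightly from the run above:

```python
import numpy

x = numpy.array([[1.0], [2.0], [3.0], [4.0]])
y_true = numpy.array([[2.0], [4.0], [6.0], [8.0]])

w, b, lr = 0.0, 0.0, 0.01
for _ in range(100):
    y_pred = w * x + b
    err = y_pred - y_true
    grad_w = 2.0 * numpy.mean(err * x)  # d(mse)/dw
    grad_b = 2.0 * numpy.mean(err)      # d(mse)/db
    w -= lr * grad_w                    # w = w - eta * g
    b -= lr * grad_b

loss = numpy.mean((w * x + b - y_true) ** 2)
print(w, b, loss)  # w approaches 2; the loss falls far below its starting value
```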
Congratulations! You have succeeded in creating a simple network. If you want to try the advanced version of linear regression, a housing price prediction model, please read [linear regression](../../beginners_guide/quick_start/fit_a_line/README.en.html). More examples of models can be found in [models](../../user_guides/models/index_en.html).
<a name="what_next"></a>
......@@ -32,7 +32,7 @@ Paddle Fluid v1.3
* Added QuantizationTransformPass, the graph-modification step performed before training in the Quantization Aware Training quantization mode.
* Memory and GPU memory optimizations
* Added support for linking in Jemalloc as a dynamic library at compile time, improving memory management performance and reducing the memory management overhead of the core framework.
* Added GPU memory optimization strategies such as memory optimize, inplace pass, and memory pool early deletion.
* Added a Pass supporting lock-free network updates.
* Added QuantizationTransformPass, the graph-modification step performed before training in the Quantization Aware Training quantization mode.
* Operator-level optimizations
......@@ -100,7 +100,7 @@ Paddle Fluid v1.3
Model Building
==============
* PaddleCV intelligent vision
* Released the PaddlePaddle video model library, including five video classification models: Attention Cluster, NeXtVLAD, LSTM, stNet, and TSN. It provides general skeleton code suited to video classification tasks, covering data reading and preprocessing, training and inference, network models, and metric computation. Users can add their own network models as needed and directly reuse the code of the other modules to deploy models quickly.
* Added support for the Mask R-CNN object detection model, with results on par with mainstream implementations.
* For the DeepLabV3+ semantic segmentation model: fused the depthwise_conv op and optimized GPU memory, reducing GPU memory usage by 50% compared with the previous version.
......@@ -163,7 +163,7 @@ python setup.py bdist_wheel
pip install --upgrade dist/visualdl-*.whl
```
If you run into other problems while packaging and installing, or just want to run Visual DL without installing it, see [here](https://github.com/PaddlePaddle/VisualDL/blob/develop/docs/develop/how_to_dev_frontend_cn.md)
## SDK
......@@ -14,7 +14,7 @@ At present, most DNN frameworks use Python as their primary language. VisualDL s
Users can get plentiful visualization results by simply adding a few lines of Python code into their model before training.
Besides the Python SDK, VisualDL is written in C++ at the low level. It also provides a C++ SDK that
can be integrated into other platforms.
## Component
......@@ -174,7 +174,7 @@ pip install --upgrade dist/visualdl-*.whl
```
If there are still issues regarding the ```pip install```, you can still start Visual DL by starting the dev server
[here](https://github.com/PaddlePaddle/VisualDL/blob/develop/docs/develop/how_to_dev_frontend_en.md)
## SDK