Commit c4b28f47 authored by Yu Yang, committed by GitHub

Merge pull request #1957 from wangkuiyi/polish-release-note

Polish release notes
- repo: https://github.com/Lucas-C/pre-commit-hooks.git
  sha: v1.0.1
  hooks:
  - id: remove-crlf
    files: (?!.*third_party)^.*$ | (?!.*book)^.*$
- repo: https://github.com/reyoung/mirrors-yapf.git
  sha: v0.13.2
  hooks:
  - id: yapf
    files: (.*\.(py|bzl)|BUILD|.*\.BUILD|WORKSPACE)$
- repo: https://github.com/pre-commit/pre-commit-hooks
  sha: 5bf6c09bfa1297d3692cadd621ef95f1284e33c0
  hooks:
  - id: check-added-large-files
  - id: check-merge-conflict
......
# Release v0.10.0
We are glad to release version 0.10.0. In this version, we are happy to release the new
[Python API](http://research.baidu.com/paddlepaddles-new-api-simplifies-deep-learning-programs/).
- Our old Python API is rather out of date. It is hard to learn and hard to
  use. To write a PaddlePaddle program using the old API, we had to write at
  least two Python files: one `data provider` and another one that defines
  the network topology. Users start a PaddlePaddle job by running the
  `paddle_trainer` C++ program, which calls the Python interpreter to run the
  network topology configuration script and then starts the training loop,
  which iteratively calls the data provider function to load minibatches.
  This prevents us from writing a Python program in a modern way, e.g., in a
  Jupyter Notebook.
- The new API, which we often refer to as the *v2 API*, allows us to write
  much shorter Python programs that define the network and the data in a single
  .py file. Such a program can also run in Jupyter Notebook, since the entry
  point is a Python program and PaddlePaddle runs as a shared library loaded
  and invoked by this Python program.
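To give a feel for the v2 API, here is a minimal, illustrative sketch modeled on the fit-a-line example; the specific layer, cost, and dataset names used here (e.g. `mse_cost`, `paddle.dataset.uci_housing`) are assumptions that may differ slightly across versions:

```python
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

# Network topology and data layout live in the same .py file.
x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))
y = paddle.layer.data(name='y', type=paddle.data_type.dense_vector(1))
y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
cost = paddle.layer.mse_cost(input=y_predict, label=y)

# Create parameters, an optimizer, and a trainer, then start training.
parameters = paddle.parameters.create(cost)
optimizer = paddle.optimizer.Momentum(momentum=0)
trainer = paddle.trainer.SGD(cost=cost,
                             parameters=parameters,
                             update_equation=optimizer)

trainer.train(
    reader=paddle.batch(
        paddle.reader.shuffle(paddle.dataset.uci_housing.train(), buf_size=500),
        batch_size=2),
    feeding={'x': 0, 'y': 1},  # map data layer names to reader tuple positions
    num_passes=30)
```

Because the whole program is ordinary Python, it can be pasted into a Jupyter Notebook cell and run there.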
Based on the new API, we delivered an online interactive
book, [Deep Learning 101](http://book.paddlepaddle.org/index.en.html)
and [its Chinese version](http://book.paddlepaddle.org/).

We also worked on updating our online documentation to describe the new API.
This is ongoing work; we will release more documentation improvements in the
next version.

We also worked on bringing the new API to distributed model training (via MPI
and Kubernetes). This work is ongoing, and we will release more about it in
the next version.
## New Features
* We release the [new Python API](http://research.baidu.com/paddlepaddles-new-api-simplifies-deep-learning-programs/).
* Deep Learning 101 book in [English](http://book.paddlepaddle.org/index.en.html) and [Chinese](http://book.paddlepaddle.org/).
* Support rectangular input for CNN.
* Support stride pooling for `seqlastin` and `seqfirstin`.
* Expose `seq_concat_layer`/`seq_reshape_layer` in `trainer_config_helpers`.
* Add dataset package: CIFAR, MNIST, IMDB, WMT14, CONLL05, movielens, imikolov.
* Add Priorbox layer for Single Shot Multibox Detection.
* Add smooth L1 cost.
* Add data reader creator and data reader decorator for the v2 API (see the sketch after this list).
* Add the CPU implementation of cmrnorm projection.
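The reader creator and reader decorators mentioned above compose plain Python generators into input pipelines. A small hedged sketch, assuming the v2 reader conventions (`paddle.reader.shuffle` and `paddle.batch` as decorators, dataset packages such as `paddle.dataset.mnist` exposing ready-made creators):

```python
import paddle.v2 as paddle

# A reader creator returns a reader: a function that yields one sample at a time.
def reader_creator(samples):
    def reader():
        for features, label in samples:
            yield features, label
    return reader

for sample in reader_creator([([0.1, 0.2], 0), ([0.3, 0.4], 1)])():
    print(sample)  # ([0.1, 0.2], 0), then ([0.3, 0.4], 1)

# Reader decorators wrap readers: shuffle buffers and reorders samples,
# batch groups them into minibatches for the trainer.
train_reader = paddle.batch(
    paddle.reader.shuffle(paddle.dataset.mnist.train(), buf_size=8192),
    batch_size=128)
```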
## Improvements
* Support Python virtualenv for `paddle_trainer`.
* Add pre-commit hooks to automatically format our code.
* Upgrade protobuf to version 3.x.
* Add an option to check data types in the Python data provider.
* Speed up the backward pass of the average layer on GPU.
* Documentation refinement.
* Check dead links in documents using Travis-CI.
* Add an example for explaining `sparse_vector`.
* Add ReLU in `layer_math.py`.
* Simplify the data processing flow for Quick Start.
* Support cuDNN Deconv.
* Add data feeder in v2 API.
* Support predicting samples from `sys.stdin` in the sentiment demo.
* Provide a multi-process interface for image preprocessing.
* Add a benchmark document for the v1 API.
* Add packages for automatically downloading public datasets.
* Rename `Argument::sumCost` to `Argument::sum`, since class `Argument` is not necessarily related to cost.
* Expose `Argument::sum` to Python.
* Add a new `TensorExpression` implementation for matrix-related expression evaluations.
* Add lazy assignment for optimizing the calculation of a batch of multiple expressions.
* Add abstract class `Function` and its implementations:
  * `PadFunc` and `PadGradFunc`.
  * `ContextProjectionForwardFunc` and `ContextProjectionBackwardFunc`.
  * `CosSimBackward` and `CosSimBackwardFunc`.
  * `CrossMapNormalFunc` and `CrossMapNormalGradFunc`.
  * `MulFunc`.
* Add classes `AutoCompare` and `FunctionCompare`, which make it easier to write unit tests comparing the GPU and CPU versions of a function.
* Generate `libpaddle_test_main.a` and remove the main function from test files.
* Support dense numpy vectors in `PyDataProvider2`.
* Clean the code base and remove some copy-and-pasted code snippets:
  * Extract `RowBuffer` class for `SparseRowMatrix`.
  * Clean the interface of `GradientMachine`.
  * Use the `override` keyword in layers.
  * Simplify `Evaluator::create`; use `ClassRegister` to create `Evaluator`s.
* Check the MD5 checksum when downloading demo datasets.
* Add `paddle::Error`, which is intended to replace `LOG(FATAL)` in Paddle.
## Bug Fixes
* Check layer input types for `recurrent_group`.
* Don't run `clang-format` on .cu source files.
* Fix `LogActivation`, which was not defined.
* Fix the bug that occurs when running `test_layerHelpers` multiple times.
* Fix the bug that the seq2seq demo exceeds the protobuf message size limit.
* Fix a bug in the data provider converter in GPU mode.
* Fix a bug in `GatedRecurrentLayer` that only occurs in predicting or `job=test` mode.
* Fix a bug in `BatchNorm` when testing more than one model.
* Fix the broken unit test of `paramRelu`.
* Fix some compile-time warnings about `CpuSparseMatrix`.
* Fix a `MultiGradientMachine` error when `trainer_count > batch_size`.
* Fix bugs that prevented asynchronous data loading in `PyDataProvider2`.
# Release v0.9.0
......