# PaddlePaddle


[![Build Status](https://travis-ci.org/PaddlePaddle/Paddle.svg?branch=develop)](https://travis-ci.org/PaddlePaddle/Paddle)
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://www.paddlepaddle.org/develop/doc/)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://www.paddlepaddle.org/doc_cn/)
[![Coverage Status](https://coveralls.io/repos/github/PaddlePaddle/Paddle/badge.svg?branch=develop)](https://coveralls.io/github/PaddlePaddle/Paddle?branch=develop)
[![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle.svg)](https://github.com/PaddlePaddle/Paddle/releases)
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)


Welcome to the PaddlePaddle GitHub.

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use,
efficient, flexible, and scalable deep learning platform, originally developed
by Baidu scientists and engineers to apply deep learning to many products at
Baidu.

Our vision is to enable deep learning for everyone via PaddlePaddle.
Please refer to our [release announcement](https://github.com/PaddlePaddle/Paddle/releases) to track the latest features of PaddlePaddle.

## Features

- **Flexibility**

    PaddlePaddle supports a wide range of neural network architectures and
    optimization algorithms. It is easy to configure complex models such as a
    neural machine translation model with an attention mechanism or complex
    memory connections (see the configuration sketch after this list).

- **Efficiency**

    To unleash the power of heterogeneous computing resources, PaddlePaddle
    is optimized at several levels, including computing, memory, architecture,
    and communication. The following are some examples:

      - Optimized math operations through SSE/AVX intrinsics, BLAS libraries
      (e.g. MKL, ATLAS, cuBLAS) or customized CPU/GPU kernels.
      - Highly optimized recurrent networks which can handle **variable-length**
      sequences without padding.
      - Optimized local and distributed training for models with high-dimensional
      sparse data.

- **Scalability**

    With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed
    up your training. PaddlePaddle can achieve high throughput and performance
    via optimized communication.

- **Connected to Products**

    In addition, PaddlePaddle is designed to be easily deployable. At Baidu,
    PaddlePaddle has been deployed into products and services with a vast
    number of users, including ad click-through rate (CTR) prediction,
    large-scale image classification, optical character recognition (OCR),
    search ranking, computer virus detection, and recommendation. It is widely
    used across Baidu's products and has had a significant impact. We hope you
    can also exploit the capabilities of PaddlePaddle to make an impact on
    your own products.
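
As a taste of what model configuration looks like, the sketch below assembles a
small feed-forward classifier with the v2 Python API; layers are composed as
ordinary Python calls, so changing the architecture is a matter of changing the
configuration code. The layer sizes and data shapes here are illustrative
choices only, and exact module paths may vary between releases.

```python
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

# Inputs: a 784-dimensional dense feature vector and an integer label drawn
# from 10 classes (shapes chosen only for illustration).
image = paddle.layer.data(name='image', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(name='label', type=paddle.data_type.integer_value(10))

# Stack fully connected layers with different activations; richer models
# (recurrent layers, attention, etc.) are configured the same way.
hidden = paddle.layer.fc(input=image, size=128, act=paddle.activation.Relu())
predict = paddle.layer.fc(input=hidden, size=10, act=paddle.activation.Softmax())

# A cost layer ties the prediction to the label so the model can be trained.
cost = paddle.layer.classification_cost(input=predict, label=label)
```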

## Installation

We recommend checking out the
[Docker installation guide](http://www.paddlepaddle.org/develop/doc/getstarted/build_and_install/docker_install_en.html)
before looking into the
[build from source guide](http://www.paddlepaddle.org/develop/doc/getstarted/build_and_install/build_from_source_en.html).

## Documentation

We provide [English](http://www.paddlepaddle.org/develop/doc/) and
[Chinese](http://www.paddlepaddle.org/doc_cn/) documentation.

- [Deep Learning 101](http://book.paddlepaddle.org/index.en.html)

  You might want to start with this online interactive book, which runs in a Jupyter Notebook.

- [Distributed Training](http://www.paddlepaddle.org/develop/doc/howto/usage/cluster/cluster_train_en.html)

  You can run distributed training jobs on MPI clusters.

- [Distributed Training on Kubernetes](http://www.paddlepaddle.org/develop/doc/howto/usage/k8s/k8s_en.html)

  You can also run distributed training jobs on Kubernetes clusters.

- [Python API](http://www.paddlepaddle.org/develop/doc/api/index_en.html)

  Our new API enables much shorter programs (see the sketch after this list).

- [How to Contribute](http://www.paddlepaddle.org/develop/doc/howto/dev/contribute_to_paddle_en.html)

  We appreciate your contributions!
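
To give a sense of how short a complete program can be with the new Python API,
here is a minimal sketch in the style of the Deep Learning 101 examples: a
softmax classifier trained on the bundled MNIST reader. The layer names, sizes,
and optimizer settings are illustrative, and exact argument names and module
paths may differ between releases, so treat this as a sketch rather than a
reference.

```python
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

# A single fully connected layer with softmax activation is the whole model
# (sizes here are illustrative for 28x28 MNIST images and 10 classes).
image = paddle.layer.data(name='pixel', type=paddle.data_type.dense_vector(784))
label = paddle.layer.data(name='label', type=paddle.data_type.integer_value(10))
predict = paddle.layer.fc(input=image, size=10, act=paddle.activation.Softmax())
cost = paddle.layer.classification_cost(input=predict, label=label)

# Create parameters, pick an optimizer, and train from the bundled MNIST
# reader in mini-batches of 128 images.
parameters = paddle.parameters.create(cost)
optimizer = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9)
trainer = paddle.trainer.SGD(
    cost=cost, parameters=parameters, update_equation=optimizer)

trainer.train(
    reader=paddle.batch(paddle.dataset.mnist.train(), batch_size=128),
    num_passes=5)
```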

## Ask Questions

You are welcome to submit questions and bug reports as [GitHub Issues](https://github.com/PaddlePaddle/Paddle/issues).

## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).