# PaddlePaddle

[![Build Status](https://travis-ci.org/PaddlePaddle/Paddle.svg?branch=develop)](https://travis-ci.org/PaddlePaddle/Paddle)
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://www.paddlepaddle.org/docs/develop/documentation/en/getstarted/index_en.html)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://www.paddlepaddle.org/docs/develop/documentation/zh/getstarted/index_cn.html)
[![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle.svg)](https://github.com/PaddlePaddle/Paddle/releases)
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)

Welcome to the PaddlePaddle GitHub.

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use,
efficient, flexible, and scalable deep learning platform, originally
developed by Baidu scientists and engineers to apply deep learning to
many products at Baidu.

Our vision is to enable deep learning for everyone via PaddlePaddle.
Please refer to our [release announcement](https://github.com/PaddlePaddle/Paddle/releases) to track the latest features of PaddlePaddle.

### Latest PaddlePaddle Release: [Fluid 0.14.0](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid)
### Install Latest Stable Release:
```
# Linux CPU
pip install paddlepaddle
# Linux GPU cuda9cudnn7
pip install paddlepaddle-gpu
# Linux GPU cuda8cudnn7
pip install paddlepaddle-gpu==0.14.0.post87
# Linux GPU cuda8cudnn5
pip install paddlepaddle-gpu==0.14.0.post85

# For installation on other platforms, refer to http://paddlepaddle.org/
```
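
To verify the installation, you can run a tiny program with the Fluid Python API. The snippet below is a minimal sketch, not taken from the official docs; it assumes a CPU install of the 0.14.0 wheel and that `fluid.layers.scale` is available. It scales a small tensor by 2 and prints the result:

```python
import numpy
import paddle.fluid as fluid

# Build a tiny program: read a float32 vector and scale it by 2.
x = fluid.layers.data(name='x', shape=[4], dtype='float32')
y = fluid.layers.scale(x, scale=2.0)

# Execute it once on CPU with a single feed.
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
out, = exe.run(feed={'x': numpy.ones((1, 4), dtype='float32')},
               fetch_list=[y])
print(out)  # expected: [[2. 2. 2. 2.]]
```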

## Features

- **Flexibility**

    PaddlePaddle supports a wide range of neural network architectures and
    optimization algorithms. It is easy to configure complex models such as
    a neural machine translation model with an attention mechanism or complex
    memory connections.

- **Efficiency**

    In order to unleash the power of heterogeneous computing resources,
    optimization occurs at different levels of PaddlePaddle, including
    computing, memory, architecture and communication. The following are some
    examples:

      - Optimized math operations through SSE/AVX intrinsics, BLAS libraries
      (e.g. MKL, OpenBLAS, cuBLAS) or customized CPU/GPU kernels.
      - Optimized CNN networks through the MKL-DNN library.
      - Highly optimized recurrent networks which can handle **variable-length**
      sequences without padding.
      - Optimized local and distributed training for models with high-dimensional
      sparse data.

- **Scalability**

    With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed
    up your training. PaddlePaddle can achieve high throughput and performance
    via optimized communication.

- **Connected to Products**

    In addition, PaddlePaddle is designed to be easily deployable. At Baidu,
    PaddlePaddle has been deployed into products and services with a vast number
    of users, including ad click-through rate (CTR) prediction, large-scale image
    classification, optical character recognition (OCR), search ranking, computer
    virus detection, and recommendation. It is widely utilized in products at
    Baidu and has achieved significant impact. We hope you can also explore
    the capability of PaddlePaddle to make an impact on your own products.

## Installation

It is recommended to check out the
[Docker installation guide](http://www.paddlepaddle.org/docs/develop/documentation/fluid/en/build_and_install/docker_install_en.html)
before looking into the
[build from source guide](http://www.paddlepaddle.org/docs/develop/documentation/fluid/en/build_and_install/build_from_source_en.html).

## Documentation

We provide [English](http://www.paddlepaddle.org/docs/develop/documentation/en/getstarted/index_en.html) and
[Chinese](http://www.paddlepaddle.org/docs/develop/documentation/zh/getstarted/index_cn.html) documentation.

- [Deep Learning 101](http://www.paddlepaddle.org/docs/develop/book/01.fit_a_line/index.html)

  You might want to start from this online interactive book that can run in a Jupyter Notebook.

- [Distributed Training](http://www.paddlepaddle.org/docs/develop/documentation/en/howto/cluster/index_en.html)

  You can run distributed training jobs on MPI clusters.

- [Distributed Training on Kubernetes](http://www.paddlepaddle.org/docs/develop/documentation/en/howto/cluster/multi_cluster/k8s_en.html)

   You can also run distributed training jobs on Kubernetes clusters.

- [Python API](http://www.paddlepaddle.org/docs/develop/api/en/overview.html)

   Our new API enables much shorter programs (see the sketch after this list).

- [How to Contribute](http://www.paddlepaddle.org/docs/develop/documentation/fluid/en/dev/contribute_to_paddle_en.html)

   We appreciate your contributions!
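
As a rough illustration of the "much shorter programs" point above, here is a hedged sketch of a linear-regression ("fit a line") style training loop written against the Fluid 0.14 Python API; it trains on random data on CPU and only mirrors the spirit of the Deep Learning 101 book, not its exact code:

```python
import numpy
import paddle.fluid as fluid

# Network: a single fully connected layer predicting y from 13 features.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=y_predict, label=y))

# Plain SGD on the mean squared error.
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

# Train for a few steps on random data, just to show the loop structure.
for step in range(10):
    x_data = numpy.random.rand(8, 13).astype('float32')
    y_data = numpy.random.rand(8, 1).astype('float32')
    loss_value, = exe.run(feed={'x': x_data, 'y': y_data},
                          fetch_list=[loss])
    print(step, loss_value)
```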

## Ask Questions

You are welcome to submit questions and bug reports as [GitHub Issues](https://github.com/PaddlePaddle/Paddle/issues).

## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).