Release log
Major Improvements
Reader Prototype. Data can now be read asynchronously through a C++ reader, potentially yielding higher throughput.
ParallelExecutor. Significantly improves multi-GPU performance over the previous solution.
Distributed Training. Major improvements in performance and stability.
In-place Activation. Significantly reduces GPU memory requirements, allowing larger batch sizes.
Operator Optimizations. Performance improvements across many operators.
Timeline Profiling. Allows performance to be visualized as a timeline.
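The memory saving behind in-place activations can be sketched in plain Python. This is a hypothetical toy, not PaddlePaddle's implementation: the point is that an activation like ReLU can overwrite its input buffer instead of allocating a second one, and on a GPU that saved allocation is what frees memory for larger batches.

```python
def relu_(buf):
    # Toy in-place ReLU (illustrative only, not Paddle's API):
    # mutate the existing buffer rather than allocating output storage.
    for i, v in enumerate(buf):
        if v < 0.0:
            buf[i] = 0.0
    return buf

activations = [-1.5, 0.0, 2.0, -0.3]
out = relu_(activations)
print(out)                 # → [0.0, 0.0, 2.0, 0.0]
print(out is activations)  # → True: same buffer, rewritten in place
```

The trade-off is that the pre-activation values are destroyed, so an in-place rewrite is only safe when no other operator still needs the original input.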
Major Bug Fixes
Fixed calls into the cuBLAS/cuDNN libraries that passed arguments of the wrong types.
Evaluated Models
Image Classification
Object Detection
OCR
Machine Translation
Text Classification
Language Model
Sequence Tagging
Project Overview
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core framework of PaddlePaddle, "飞桨": high-performance single-machine and distributed training for deep learning and machine learning, plus cross-platform deployment)
Original Project