Fluid benchmark & book validation
Created by: dzhwinter
In the 0.11.0 release we will ship the book chapters rewritten with Fluid, and there are some tasks that need to be done.
Task List 1: compare results with the Paddle Book V2 chapters
We need to validate that the Fluid book chapters converge to approximately the same results as their V2 counterparts; a minimal comparison sketch follows the task list below.
- book.03 image classification CPU loss validation @jacquesqiao @qingqing01 @kuke
- book.03 image classification GPU loss validation @jacquesqiao @qingqing01 @kuke
  - ResNet
  - VGG
- book.04 word2vec CPU loss validation @peterzhang2029
- book.04 word2vec GPU loss validation @peterzhang2029
- book.05 recommendation systems CPU loss validation @typhoonzero
- book.05 recommendation systems GPU loss validation @typhoonzero
Note that we have three different implementations of understand_sentiment; only the LSTM one is tested in this chapter.
- book.06 understand_sentiment lstm CPU loss validation @ranqiu92
- book.06 understand_sentiment lstm GPU loss validation @ranqiu92
- book.07 label semantic roles CPU loss validation @chengduoZH (there is no GPU implementation of label semantic roles)
- book.08 machine translation CPU loss validation @jacquesqiao @ChunweiYan
- book.08 machine translation GPU loss validation @jacquesqiao @ChunweiYan
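
To make "approximately the same result" concrete, here is a minimal sketch of the kind of check a validator could run after collecting per-batch losses from a V2 book chapter run and from the matching Fluid script. The function names, the sample loss values, and the 5% tolerance are illustrative assumptions, not part of the benchmark scripts.

```python
# Hedged sketch: compare the smoothed final losses of a V2 run and a Fluid run.
# The loss lists are assumed to be collected from the two training logs;
# the relative tolerance below is only an example threshold.

def mean_tail_loss(losses, tail=10):
    """Average the last `tail` losses to smooth out per-batch noise."""
    tail = min(tail, len(losses))
    return sum(losses[-tail:]) / float(tail)


def converged_to_same_result(v2_losses, fluid_losses, rel_tol=0.05):
    """Return True if the smoothed final losses agree within rel_tol."""
    v2_final = mean_tail_loss(v2_losses)
    fluid_final = mean_tail_loss(fluid_losses)
    denom = max(abs(v2_final), 1e-12)
    return abs(fluid_final - v2_final) / denom <= rel_tol


if __name__ == "__main__":
    # Hypothetical per-batch losses from the two runs of one chapter.
    v2_losses = [2.30, 1.10, 0.52, 0.31, 0.27, 0.25]
    fluid_losses = [2.31, 1.15, 0.55, 0.30, 0.26, 0.24]
    print(converged_to_same_result(v2_losses, fluid_losses))  # True
```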
How to do the tasks
We have benchmark scripts and Docker images, so these checks should be quick to run; please report a bug if you find any issue (operator implementation, convergence result). Because we are still fine-tuning performance, if you see a large gap in performance, please file an issue without hesitation.
The scripts are under this directory; please find the one that matches the chapter name: https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/fluid/tests/book
Old book Docker image: paddlepaddle/book:latest-gpu
New book Docker image: dzhwinter/benchmark:latest
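
To feed the comparison sketch above, here is a hedged example of pulling loss values out of a training log captured from one of these Docker images. The container command in the comment, the script path, and the `loss=` log format are assumptions; adapt them to whatever the chapter script actually prints.

```python
# Example (assumed) way of capturing a log from the new image:
#   docker run --rm dzhwinter/benchmark:latest \
#       python /path/to/the/chapter/test_script.py > fluid.log
# The pattern below assumes log lines such as "... loss=0.123 ...".
import re

LOSS_PATTERN = re.compile(r"loss[=:]\s*([0-9]*\.?[0-9]+)")


def extract_losses(log_path):
    """Collect every loss value printed in the given log file."""
    losses = []
    with open(log_path) as f:
        for line in f:
            match = LOSS_PATTERN.search(line)
            if match:
                losses.append(float(match.group(1)))
    return losses
```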