1. 01 Dec 2016, 1 commit
    • Fix --job=test · 66520f8b
      xuwei06 committed
      When define_py_data_sources2 has both a train_list and a test_list,
      job=test makes the trainer create both dataProvider_ and testDataProvider_,
      but dataProvider_ is never used. This causes a SIGSEGV in finishAsync(),
      because asyncLoader_ was never created.
      
      Change-Id: If579f715f80a70ebc795094792c3436bfa0f5746
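
      The crash pattern above (calling an async finish on a provider whose loader was never constructed) can be sketched in Python. All names here (DataProvider, AsyncLoader, finish_async) are illustrative analogues, not the actual Paddle C++ API:

      ```python
      class AsyncLoader:
          def finish(self):
              return "finished"

      class DataProvider:
          def __init__(self, use_async=False):
              # asyncLoader_ analogue: only created when async loading is enabled
              self.async_loader = AsyncLoader() if use_async else None

          def finish_async(self):
              # Fix analogue: guard the call instead of dereferencing an
              # object that was never constructed (the SIGSEGV above).
              if self.async_loader is not None:
                  return self.async_loader.finish()
              return None
      ```

      With the guard in place, a provider created without an async loader simply ignores finish_async() instead of crashing.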
  2. 22 Nov 2016, 1 commit
  3. 13 Nov 2016, 1 commit
  4. 12 Nov 2016, 1 commit
  5. 03 Nov 2016, 1 commit
  6. 02 Nov 2016, 1 commit
    • Add job=time in trainer, refine cudnn_conv to reduce gpu memory and speed up training. (#218) · 45c81a41
      qingqing01 committed
      * Add benchmark for PaddlePaddle, TensorFlow, and Caffe
      
      * ConvProjection to reduce memory for GoogLeNet
      
      * Add unit test for ConvProjection.
      1. Unit test in test_LayerGrad.
      2. Compare ConvProjection with CudnnConvLayer, and compare concat_layer + img_conv_layer with concat_layer + conv_projection.
      
      * Reduce cudnn_conv memory and add a benchmark document.
      1. Use TmpMatrix as the workspace in cudnn_conv to reduce GPU memory; this saves a large amount of memory.
      2. Add benchmark document.
      3. Fix smallnet_mnist_cifar.py in paddle.
      
      * Add job=time and refine cudnn_conv to reduce GPU memory and speed up training
      
      * Refine cudnn_conv and shared biases operation in concat_layer and mixed_layer.
      
      * follow comments
      
      * follow comments
      
      * Use unique_ptr to prevent memory leaks in CudnnConvLayer.
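
      The workspace-sharing idea above (a single scratch buffer borrowed by every conv layer instead of each holding its own, the TmpMatrix analogue) can be sketched as a grow-only buffer. The class and method names are illustrative, not the Paddle API:

      ```python
      class SharedWorkspace:
          """Grow-only scratch buffer shared across layers."""

          def __init__(self):
              self._buf = bytearray(0)

          def get(self, nbytes):
              # Grow only when a layer needs more than any previous layer did,
              # so peak memory is max(per-layer need), not sum(per-layer need).
              if len(self._buf) < nbytes:
                  self._buf = bytearray(nbytes)
              return memoryview(self._buf)[:nbytes]
      ```

      Peak allocation then equals the single largest request across all layers, which is why sharing the workspace saves so much GPU memory when many conv layers are present.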
  7. 28 Oct 2016, 2 commits
  8. 27 Oct 2016, 1 commit
    • Python trainer api (#193) · cbe734b3
      emailweixu committed
      * Python trainer API and demo
      
      * Adding missing PaddleAPIPrivate.h
      
      * Adding api_train.sh
      
      * More comments
      
      * Bump up patch version to 0b3
  9. 24 Oct 2016, 1 commit
  10. 17 Oct 2016, 1 commit
    • Fix sparse training for trainer_count=1 (#204) · 28bc05b1
      emailweixu committed
      * Fix sparse training for trainer_count=1
      
      For trainer_count=1, the gradient machine is a NeuralNetwork, which does not create a parameter buffer for PARAMETER_GRADIENT for sparse update in Parameter::enableType. However, the gradient parameter buffer is still used in SgdThreadUpdater.
      
      * Minor update to comment
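
      The bug and fix described above (a buffer skipped at setup time but still read later) can be sketched in Python. The constants and method names are illustrative analogues of the C++ code, not the actual API:

      ```python
      PARAMETER_VALUE, PARAMETER_GRADIENT = 0, 1

      class Parameter:
          def __init__(self, size):
              self.size = size
              self.bufs = {}

          def enable_type(self, ptype, sparse_update=False):
              # Bug analogue: allocation is skipped for sparse updates,
              # yet the updater later reads the gradient buffer anyway.
              if ptype == PARAMETER_GRADIENT and sparse_update:
                  return
              self.bufs[ptype] = [0.0] * self.size

          def get_buf(self, ptype):
              # Fix analogue: allocate lazily on first use instead of
              # assuming enable_type() already created the buffer.
              if ptype not in self.bufs:
                  self.bufs[ptype] = [0.0] * self.size
              return self.bufs[ptype]
      ```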
  11. 13 Oct 2016, 1 commit
  12. 10 Oct 2016, 1 commit
  13. 29 Sep 2016, 3 commits
  14. 28 Sep 2016, 1 commit
  15. 27 Sep 2016, 1 commit
  16. 24 Sep 2016, 1 commit
  17. 20 Sep 2016, 3 commits
    • Optional fields to shrink generated proto size (#93) · 2c5a6ac0
      Yu Yang committed
      * remove unnecessary field set in ParameterConfig, Evaluators, etc
    • d2e1b46f
    • Add min_pool_size, Add default value of should_shuffle (#70) · 90b9cba7
      Yu Yang committed
      * min_pool_size is infinite by default.
        * Add unittest for min_pool_size
      * Fix bug in can_over_batch_size
        * Add unittest for can_over_batch_size
      * Add DEFINE_PROVIDER_EX
      * Add default value of should_shuffle
        * When training, the default value of should_shuffle is True.
        * When testing, the default value of should_shuffle is False.
        * A user can set whether a provider should shuffle by passing
          should_shuffle to `@provider`.
        * should_shuffle can handle a list of values, not just a boolean.
      * Add input order mapping by using name
        * Add unittest
      * Add a check of input format.
        * Disabled by default for speed reasons.
        * On a check error, the user can stop training or continue training
          without that sample.
      * Use deque instead of vector in the generator pool to make erasing a
        generator faster.
      * Add Chinese/English documentation
      * Set should_shuffle = false in unittest
      * Add Python files to dependencies.
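
      The default resolution the commit describes (should_shuffle True for training, False for testing; min_pool_size unbounded unless set) can be sketched as a small helper. The function name is illustrative, not the actual `@provider` implementation:

      ```python
      import math

      def resolve_provider_defaults(is_train, should_shuffle=None,
                                    min_pool_size=None):
          # should_shuffle defaults to the training/testing mode.
          if should_shuffle is None:
              should_shuffle = is_train
          # min_pool_size is "infinite by default": pool everything.
          if min_pool_size is None:
              min_pool_size = math.inf
          return should_shuffle, min_pool_size
      ```

      Explicit user arguments always win over the mode-derived defaults, which matches the commit's point that a user can still force shuffling on or off via `@provider`.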
  18. 12 Sep 2016, 1 commit
  19. 09 Sep 2016, 1 commit
  20. 08 Sep 2016, 2 commits
  21. 31 Aug 2016, 1 commit
  22. 30 Aug 2016, 2 commits
  23. 29 Aug 2016, 1 commit