1. 12 Nov 2016, 1 commit
  2. 11 Nov 2016, 1 commit
    • X
      '*' operator overload for LayerOutput · 36fa2517
      xuwei06 authored
      Make '*' support multiplication between a scalar and a LayerOutput.
      
      Also change '+' to support adding a scalar to a vector.
      
      Change-Id: I7daf35590dc2b2f855a29d9ef43ac57979442e0f
      36fa2517
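The overloads this commit describes can be sketched in plain Python. The `FakeLayerOutput` class below is a hypothetical stand-in, not PaddlePaddle's actual `LayerOutput`; it only illustrates how `__mul__`/`__rmul__` and `__add__`/`__radd__` make `scalar * layer` and `layer + scalar` work in either operand order.

```python
# Hypothetical sketch of scalar '*' and '+' overloads on a layer-output
# wrapper; names are illustrative, not PaddlePaddle's real API.
class FakeLayerOutput:
    def __init__(self, values):
        self.values = list(values)

    def __mul__(self, scalar):
        # multiply each element of the layer's output by a scalar
        return FakeLayerOutput(v * scalar for v in self.values)

    __rmul__ = __mul__  # supports both `2 * out` and `out * 2`

    def __add__(self, scalar):
        # add a scalar to every element of the (vector) output
        return FakeLayerOutput(v + scalar for v in self.values)

    __radd__ = __add__

out = FakeLayerOutput([1.0, 2.0, 3.0])
scaled = 2 * out     # elementwise: [2.0, 4.0, 6.0]
shifted = out + 0.5  # elementwise: [1.5, 2.5, 3.5]
```

Defining `__rmul__` alongside `__mul__` is what lets the scalar appear on the left, since Python falls back to the right operand's reflected method when `int.__mul__` returns `NotImplemented`.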
  3. 10 Nov 2016, 11 commits
  4. 08 Nov 2016, 2 commits
    • Q
      add Python wrapper for sppLayer · 5ece5c96
      qijun authored
      5ece5c96
    • X
      Add SumCost · ebad8e52
      xuwei06 authored
      This allows the user to implement any type of cost by summing over the output of non-cost layers.
      
      Change-Id: Ic55aaabbf0c1299e70b8e48a0effcc91f8f5bd29
      ebad8e52
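The idea behind SumCost can be sketched in a few lines of plain Python; this is an illustrative toy, not PaddlePaddle's implementation. Because the cost is just the sum of an arbitrary layer's outputs, the gradient with respect to each output is 1, so any differentiable expression built from non-cost layers can serve as a training objective.

```python
# Hypothetical sketch of a "sum cost": cost = sum of a layer's outputs.
def sum_cost(outputs):
    """Forward: cost = sum(outputs). Backward: d(cost)/d(output_i) == 1."""
    cost = sum(outputs)
    grad = [1.0] * len(outputs)  # gradient of a plain sum is all ones
    return cost, grad

cost, grad = sum_cost([1.0, 2.0, 3.0])  # cost == 6.0
```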
  5. 07 Nov 2016, 2 commits
  6. 05 Nov 2016, 1 commit
    • E
      Add elementwise math operations (#343) · 6c3a678c
      emailweixu authored
      * Add elementwise math operations
      This allows us to use expressions like y = log(1 + exp(x)).
      Also added unittests for ActivationFunction
      * Enforce keyword arguments for non-positional arguments
      * Add LogActivation to doc
      6c3a678c
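The example expression from this commit, y = log(1 + exp(x)) (softplus), can be sketched in plain Python; this is not PaddlePaddle code, just an illustration of the kind of elementwise composition the commit enables, including the usual overflow guard for large x.

```python
import math

# Illustrative sketch: composing elementwise math operations such as
# y = log(1 + exp(x)), with a guard against exp() overflow.
def softplus(x):
    # For large x, log(1 + exp(x)) is numerically equal to x.
    if x > 30.0:
        return x
    return math.log(1.0 + math.exp(x))

ys = [softplus(x) for x in (-1.0, 0.0, 1.0)]  # applied elementwise
```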
  7. 02 Nov 2016, 1 commit
    • Q
      Add job=time in trainer, refine cudnn_conv to reduce GPU memory and speed up training. (#218) · 45c81a41
      qingqing01 authored
      * Add benchmark for PaddlePaddle, TensorFlow and Caffe
      
      * ConvProjection to reduce memory for GoogLeNet
      
      * Add unit test for ConvProjection.
      1. unit test in test_LayerGrad.
      2. Compare ConvProjection with CudnnConvLayer, and compare concat_layer+img_conv_layer with concat_layer+conv_projection.
      
      * Reduce cudnn_conv memory and add benchmark document.
      1. Use TmpMatrix as the workspace in cudnn_conv to reduce GPU memory. This saves a large amount of memory.
      2. Add benchmark document.
      3. Fix smallnet_mnist_cifar.py in Paddle.
      
      * Add job=time and refine cudnn_conv to reduce GPU memory and speed up training
      
      * Refine cudnn_conv and shared biases operation in concat_layer and mixed_layer.
      
      * follow comments
      
      * follow comments
      
      * Use unique_ptr to prevent memory leaks in CudnnConvLayer.
      45c81a41
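The shared-workspace idea behind the TmpMatrix change can be sketched generically; the `SharedWorkspace` class below is a hypothetical illustration in plain Python, not PaddlePaddle's TmpMatrix. Instead of every conv layer holding its own scratch buffer, layers borrow one buffer that only grows to the largest request, so peak memory is the maximum, not the sum, of the per-layer workspaces.

```python
# Hypothetical sketch of a shared temporary workspace: one buffer is
# reused by all layers and grown to the largest size ever requested.
class SharedWorkspace:
    def __init__(self):
        self.buf = bytearray()

    def get(self, nbytes):
        if len(self.buf) < nbytes:
            self.buf = bytearray(nbytes)  # grow once to the max request
        return memoryview(self.buf)[:nbytes]

ws = SharedWorkspace()
a = ws.get(1024)  # first layer's workspace request
b = ws.get(4096)  # a larger request grows the single shared buffer
```

The trade-off is that two layers cannot use the workspace simultaneously, which is fine when layers run one at a time during forward/backward.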
  8. 30 Oct 2016, 1 commit
  9. 24 Oct 2016, 1 commit
  10. 17 Oct 2016, 2 commits
  11. 09 Oct 2016, 1 commit
  12. 28 Sep 2016, 1 commit
  13. 22 Sep 2016, 1 commit
  14. 21 Sep 2016, 1 commit
  15. 20 Sep 2016, 3 commits
    • Y
      Optional fields to shrink generated proto size (#93) · 2c5a6ac0
      Yu Yang authored
      * Remove unnecessary fields set in ParameterConfig, Evaluators, etc.
      2c5a6ac0
    • H
      split dotmul_projection and dotmul_operator (#87) · 159dd833
      Haichao-Zhang authored
      * split dotmul_projection and dotmul_operator
      * Fix a bug in output-size checking for mixed layer
      159dd833
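The distinction this commit draws can be illustrated with a toy sketch; the two functions below are hypothetical stand-ins, not PaddlePaddle's layers. A dotmul *operator* takes two layer inputs and multiplies them elementwise, while a dotmul *projection* multiplies a single input by a learned per-element weight vector, which is why the two need separate code paths.

```python
# Hypothetical sketch of the elementwise (Hadamard) product behind
# dotmul_operator and dotmul_projection; names are illustrative only.
def dotmul_operator(a, b):
    # two data inputs, multiplied elementwise
    return [x * y for x, y in zip(a, b)]

def dotmul_projection(x, weight):
    # one data input, scaled elementwise by a learned weight vector
    return [v * w for v, w in zip(x, weight)]
```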
    • Y
      Add min_pool_size, Add default value of should_shuffle (#70) · 90b9cba7
      Yu Yang authored
      * min_pool_size would be infinite by default.
        * add unittest for min_pool_size
      * Fix bug in can_over_batch_size
        * add unittest for can_over_batch_size
      * Add DEFINE_PROVIDER_EX
      * Add default value of should_shuffle
        * When training, the default value of should_shuffle is True.
        * When testing, the default value of should_shuffle is False.
        * A user can set whether a provider should shuffle by passing should_shuffle to `@provider`.
        * should_shuffle can handle a list of values, not just a boolean.
      * Add input order mapping by using name
        * Add unittest
      * Add a check of the input format.
        * Disabled by default for speed reasons.
        * The user can stop training when a check fails, or continue training
          without that sample.
      * Use deque instead of vector in the generator pool, making generator
        erasure faster.
      * Add Chinese/English documentation.
      * Make should_shuffle = false in unittests.
      * Add Python files to dependencies.
      90b9cba7
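The `@provider` settings this commit adds can be sketched with a generic decorator; the sketch below is hypothetical and only mimics the pattern of attaching configuration such as `should_shuffle` and `min_pool_size` to a data-generator function so the framework can read them later, rather than reproducing PaddlePaddle's real PyDataProvider2.

```python
# Hypothetical sketch of an @provider-style decorator: configuration is
# stored as attributes on the generator function for the framework to read.
def provider(should_shuffle=None, min_pool_size=None):
    def wrap(generator_fn):
        generator_fn.should_shuffle = should_shuffle  # None => framework default
        generator_fn.min_pool_size = min_pool_size    # None => unbounded pool
        return generator_fn
    return wrap

@provider(should_shuffle=True, min_pool_size=1000)
def process(settings, filename):
    # a data provider yields samples one at a time
    for i in range(3):
        yield [i]
```

Defaulting both settings to `None` lets the framework apply mode-dependent defaults, e.g. shuffling during training but not during testing, exactly because "unset" is distinguishable from an explicit `False`.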
  16. 17 Sep 2016, 1 commit
  17. 14 Sep 2016, 3 commits
  18. 08 Sep 2016, 1 commit
  19. 29 Aug 2016, 1 commit