- 04 Nov 2016, 6 commits
  - By gangliao
  - By wangkuiyi
  - By emailweixu
    For multiple installations of paddle, there may be multiple versions of the python package under opt/paddle/share/wheels/, and we should install the right one (see the sketch below). Ideally, the wrong versions would be removed at install time, but that is not easy to do with cmake. Change-Id: Ida8a8d60643ad9e42cf1c85776de9122d5ba1392
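A minimal sketch of the wheel-selection idea described in the commit above, not the project's actual cmake logic: pick the most recently built paddle wheel when several versions sit side by side. The helper name `find_latest_wheel` and the use of `pip install --force-reinstall` are assumptions for illustration; only the opt/paddle/share/wheels/ path comes from the commit message.

```python
import glob
import os
import subprocess

def find_latest_wheel(wheel_dir="opt/paddle/share/wheels"):
    """Pick the most recently built paddle wheel from the directory.

    Hypothetical helper for illustration only; the real build would install
    the wheel that the current cmake run just produced.
    """
    wheels = glob.glob(os.path.join(wheel_dir, "paddle-*.whl"))
    if not wheels:
        raise FileNotFoundError("no paddle wheel found in %s" % wheel_dir)
    # Newest modification time is a reasonable proxy for "the version just built".
    return max(wheels, key=os.path.getmtime)

if __name__ == "__main__":
    wheel = find_latest_wheel()
    # --force-reinstall keeps a stale, previously installed version from shadowing the new one.
    subprocess.check_call(["pip", "install", "--force-reinstall", wheel])
```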
  - By hedaoyuan
  - By lzhao4ever
    * Add matrix inverse
- 03 Nov 2016, 3 commits
  - By emailweixu
    For multiple installations of paddle, there may be multiple versions of the python package under opt/paddle/share/wheels/, and we should install the right one. Ideally, the wrong versions would be removed at install time, but that is not easy to do with cmake. Change-Id: Ida8a8d60643ad9e42cf1c85776de9122d5ba1392
  - By Haichao-Zhang
  - By Yu Yang
    * Fix a forgotten finishTestPeriod call in testOnePeriod.
    * Fix #318
- 02 Nov 2016, 4 commits
  - By qingqing01
  - By Yu Yang
  - By Yu Yang
  - By qingqing01
    * Add benchmarks for PaddlePaddle, TensorFlow, and Caffe.
    * Add ConvProjection to reduce memory for GoogLeNet (see the sketch below).
    * Add unit tests for ConvProjection: a unit test in test_LayerGrad, a comparison of ConvProjection against CudnnConvLayer, and a comparison of concat_layer + img_conv_layer against concat_layer + conv_projection.
    * Reduce cudnn_conv memory and add a benchmark document: use TmpMatrix as the workspace in cudnn_conv, which saves a lot of GPU memory; add the benchmark document; fix smallnet_mnist_cifar.py in paddle.
    * Add job=time and refine cudnn_conv to reduce GPU memory and speed it up.
    * Refine cudnn_conv and the shared-bias operation in concat_layer and mixed_layer.
    * Follow comments.
    * Follow comments.
    * Use unique_ptr to prevent memory leaks in CudnnConvLayer.
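A rough NumPy sketch of the memory idea behind using conv projections inside a concat/mixed layer, illustrative only and not the PaddlePaddle code: each branch writes its result directly into its channel slice of one preallocated concat buffer instead of producing a separate output that is copied again at concat time. All names below (run_branches_into_shared_buffer, make_1x1_branch) are made up for the sketch.

```python
import numpy as np

def run_branches_into_shared_buffer(x, branches, out_channels_per_branch):
    """Evaluate several conv-like branches and write each result into its
    own channel slice of one shared output buffer (the concat result),
    avoiding a second, concatenation-time copy of every branch output."""
    batch, _, height, width = x.shape
    total_channels = sum(out_channels_per_branch)
    out = np.empty((batch, total_channels, height, width), dtype=x.dtype)
    offset = 0
    for branch, channels in zip(branches, out_channels_per_branch):
        out[:, offset:offset + channels] = branch(x)  # branch produces its feature maps
        offset += channels
    return out

# Toy 1x1 "convolution" branches so the sketch stays self-contained.
def make_1x1_branch(in_channels, out_channels):
    weight = np.random.randn(out_channels, in_channels) * 0.01
    return lambda x: np.einsum("oc,bchw->bohw", weight, x)

x = np.random.randn(2, 3, 8, 8)
out = run_branches_into_shared_buffer(
    x, [make_1x1_branch(3, 4), make_1x1_branch(3, 6)], [4, 6])
print(out.shape)  # (2, 10, 8, 8)
```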
- 31 Oct 2016, 3 commits
  - By backyes
  - By gangliao
    * DYLD_LIBRARY_PATH is disabled since Mac OS X 10.11.
    * Fix clang + GPU compile errors on Mac OS.
    * Fix some wording and errors in the build docs.
  - By zhouxiao-coder
- 28 Oct 2016, 5 commits
  - By backyes
    * A cluster may use many machines to train a model, and some parameters can be too small for the ParameterServer, so some pservers may end up without any ParamBlock (see the sketch below).
    * Also give a runtime warning when ports_num or ports_num_for_sparse is too large.
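A hedged back-of-the-envelope sketch of why the warning above can fire: the function and its defaults are invented for illustration and only loosely mirror ports_num and ParamBlock from the commit message.

```python
import math

def blocks_per_pserver(parameter_size, block_size, num_pservers, ports_num=1):
    """Rough model of how parameter blocks spread over parameter servers.

    A parameter is split into ceil(parameter_size / block_size) blocks, and the
    blocks are dealt out across num_pservers * ports_num endpoints; when there
    are fewer blocks than endpoints, some endpoints receive nothing.
    """
    num_blocks = math.ceil(parameter_size / block_size)
    num_endpoints = num_pservers * ports_num
    if num_blocks < num_endpoints:
        print("warning: only %d blocks for %d pserver endpoints; "
              "some pservers will hold no ParamBlock" % (num_blocks, num_endpoints))
    return [num_blocks // num_endpoints + (1 if i < num_blocks % num_endpoints else 0)
            for i in range(num_endpoints)]

# A small bias vector spread over a large cluster triggers the warning.
print(blocks_per_pserver(parameter_size=256, block_size=1024, num_pservers=8, ports_num=4))
```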
  - By luotao1
    * Fix an interface bug of block_expand_layer and add a unittest (see the sketch below).
    * Auto-compute num_channels.
    * The default value of num_channels is None.
    * Adjust the input order of block_expand.
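A small NumPy sketch of what block expansion does and why num_channels can default to None and be inferred from the input. It is illustrative only, not the layer's implementation, and its parameter names only loosely mirror block_expand_layer.

```python
import numpy as np

def block_expand(img, block_y, block_x, stride_y=1, stride_x=1, num_channels=None):
    """Slide a block window over the image and flatten each block into a vector,
    yielding a sequence of length (#steps_y * #steps_x).

    num_channels=None means it is inferred from the input shape, mirroring the
    default described in the commit above.
    """
    channels, height, width = img.shape
    if num_channels is None:
        num_channels = channels
    steps_y = (height - block_y) // stride_y + 1
    steps_x = (width - block_x) // stride_x + 1
    blocks = []
    for i in range(steps_y):
        for j in range(steps_x):
            y, x = i * stride_y, j * stride_x
            blocks.append(img[:num_channels, y:y + block_y, x:x + block_x].ravel())
    # Shape: (steps_y * steps_x, num_channels * block_y * block_x)
    return np.stack(blocks)

img = np.arange(2 * 6 * 6, dtype=float).reshape(2, 6, 6)
print(block_expand(img, block_y=3, block_x=3, stride_y=3, stride_x=3).shape)  # (4, 18)
```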
  - By backyes
  - By Yu Yang
    * Change "contribute to paddle" to fit the new branching model.
- 27 Oct 2016, 1 commit
  - By emailweixu
    * Python trainer API and demo.
    * Add the missing PaddleAPIPrivate.h.
    * Add api_train.sh.
    * More comments.
    * Bump the patch version to 0b3.
- 26 Oct 2016, 3 commits
  - By backyes
    * Add a runtime check of input sparse data for sparse layers, to avoid invalid data access on the pserver end during prefetch.
    * The remote sparse design supports both binary sparse and float sparse.
  - By emailweixu
  - By emailweixu
- 25 Oct 2016, 2 commits
- 24 Oct 2016, 5 commits
  - By luotao1
  - By luotao1
    * Add maxout layer, including interface and unittest (see the sketch below).
    * Follow maxout comments.
    * Auto-set channels.
    * Fix a unittest bug in test_RecurrentGradientMachine.
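A minimal NumPy sketch of the maxout operation over channel groups, not the PaddlePaddle implementation, showing how the output channel count can be set automatically from the input channels and the group count.

```python
import numpy as np

def maxout(x, groups):
    """Maxout over channel groups.

    x: input of shape (batch, channels, height, width).
    groups: number of channels that compete in each max; the output has
    channels // groups feature maps, which is how the layer can derive its
    output channel count automatically.
    """
    batch, channels, height, width = x.shape
    assert channels % groups == 0, "channels must be divisible by groups"
    out_channels = channels // groups
    x = x.reshape(batch, out_channels, groups, height, width)
    return x.max(axis=2)

# Example: 8 input channels with groups=2 -> 4 output channels.
x = np.random.randn(1, 8, 5, 5)
print(maxout(x, groups=2).shape)  # (1, 4, 5, 5)
```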
  - By wenboyang
    * Fix an error in the quick_start doc.
    * There are some warnings when executing preprocess.sh.
  - By luotao1
  - By luotao1
- 21 Oct 2016, 1 commit
  - By alvations
- 19 Oct 2016, 2 commits
- 18 Oct 2016, 3 commits
- 17 Oct 2016, 2 commits
  - By emailweixu
    * Fix sparse training for trainer_count=1. For trainer_count=1, the gradient machine is NeuralNetwork, which does not create the parameter buffer for PARAMETER_GRADIENT for sparse update in Parameter::enableType, but the gradient parameter buffer is still used in SgdThreadUpdater.
    * Minor update to a comment.
  - By emailweixu