- 16 April 2019, 1 commit

Committed by dengkaipeng

- 15 April 2019, 2 commits

Committed by liuwei1031

* optimize lstmp and sample_logits op, test=develop
* update op_use_default_grad_op_maker.spec, test=develop
* delete useless file, test=develop
* append 0 to dim variable to avoid memory reuse, test=develop

Committed by Jiabin Yang

* test=release/1.4, fix hsigmoid dereference nullptr
* test=release/1.4, refine condition
* test=release/1.4, refine comments

- 12 April 2019, 1 commit

Committed by ruri

* cherry-pick 16763, test=release/1.4
* cherry-pick 16763, test=release/1.4
* fix some comments, including cosine_decay, l2_normalize, pixel_shuffle
* Add api.spec, test=develop
* update api.spec, test=develop
* add api.spec, test=develop
* test=develop
* test=develop
* fix conflict, test=develop
* Add Pixel Shuffle OP (#15782)
* add pixel_shuffle op
* add pixel_shuffle op, test=develop
* rewrite code, test=develop
* delete useless comment, test=develop
* Refine pixel_shuffle_op and unit testing
* refine code, test=develop
* refine .cu, test=develop
* fix unittest, test=develop
* Fix unit testing, test=develop
* resolve conflict, test=develop
* fix test, test=develop
* fix API, test=develop
* fix test datatype bug, test=develop
* polish comments, test=develop
* add API, test=develop
* test=develop
* Add Pixel_Shuffle OP, test=develop
* support python3, test=develop
* add include memory to fix travis CI bug, test=develop
* cherry-pick 16763, 15782, test=release/1.4
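
For context on the cherry-picked Pixel Shuffle OP: it rearranges a tensor of shape [N, C*r*r, H, W] into [N, C, H*r, W*r]. Below is a minimal sketch of calling it through the fluid layers API; the input shape and upscale factor are illustrative assumptions, not taken from the commit.

```python
import paddle.fluid as fluid

# NCHW input whose channel count (9) is divisible by upscale_factor ** 2.
x = fluid.layers.data(name="x", shape=[9, 4, 4], dtype="float32")

# Rearranges [N, 9, 4, 4] into [N, 1, 12, 12] with upscale_factor=3.
out = fluid.layers.pixel_shuffle(x, upscale_factor=3)
```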

- 11 April 2019, 1 commit

Committed by lujun

- 31 March 2019, 1 commit

Committed by qingqing01

* Add linear learning rate warmup method. This warmup lr can be combined with other learning rate strategies. For example:
  decayed_lr = fluid.layers.linear_lr_warmup(
      fluid.layers.piecewise_decay(boundaries, lr_steps),
      warmup_steps, start_lr, end_lr)
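
A slightly fuller sketch of the warmup usage quoted above; the boundaries, step counts, and learning-rate values are illustrative assumptions.

```python
import paddle.fluid as fluid

boundaries = [10000, 20000]      # global steps where the piecewise lr changes
lr_values = [0.1, 0.01, 0.001]   # lr for each interval defined by the boundaries
warmup_steps = 500               # ramp the lr linearly over the first 500 steps
start_lr, end_lr = 0.0, 0.1

# Wrap a piecewise-decay schedule with a linear warmup phase.
decayed_lr = fluid.layers.linear_lr_warmup(
    fluid.layers.piecewise_decay(boundaries, lr_values),
    warmup_steps, start_lr, end_lr)

optimizer = fluid.optimizer.SGD(learning_rate=decayed_lr)
```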

- 29 March 2019, 2 commits

- 28 March 2019, 3 commits

Committed by lujun

Committed by minqiyang

test=develop

Committed by Jiabin Yang

* test=develop, fix space_to_depth_doc
* test=develop, refine indent
* test=develop, refine code
* test=develop, add api spec

- 27 March 2019, 5 commits

Committed by chengduo

test=develop

Committed by minqiyang

test=develop

Committed by minqiyang

Committed by dengkaipeng

Committed by lujun

- 26 March 2019, 2 commits

Committed by Jiabin Yang

* add layer norm to Layers, add transformer prepare encoding
* little change
* finish encoder part
* add decoder part
* finish model part
* add test case and part of data feed
* add transformer test
* add to_parameter, add remove in set_attr
* test=develop, fix pos encoding bug, create_parameter with standard name
* test=develop, rm dropout test in imperative
* test=develop, fix cpu error
* test=develop, fix minimize bug
* test=develop, fix one hot not stop gradient
* test=develop, fix one hot not stop gradient
* test=develop, refine parameter name
* test=develop, fix transformer test in imperative mode
* test=develop, fix transformer test in imperative mode
* test=develop, fix boost and mkl download error
* test=develop, fix boost and mkl download error
* test=develop, fix ci and refine code
* test=develop, fix ci and refine code

Committed by whs

* Add fsp operator. 1. Add unit test. 2. Add python API. 3. Add layer test.
* Add quantization strategy. 1. Add API. 2. Add unit test.
* Add distillation strategy.
* Add unit test config file for quantization
* Fix Copyright. test=develop
* Fix setup.py
* Fix document of layers.py. test=develop
* Fix unit test in python3. test=develop
* Fix documents. test=develop
* 1. refine fsp op by batched gemm 2. remove unused import. test=develop
* Fix test_dist_se_resnext. 1. disable test distillation. 2. reset framework.py. test=develop
* Enable unit test of distillation after fixing Block._clone_variable. test=develop
* Fix cdn issue. test=develop
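
For context on the fsp operator added above: it computes the "flow of solution procedure" (FSP) matrix between two feature maps that share the same batch size and spatial size, which the distillation strategy builds on. A minimal sketch, assuming the Python API is exposed as fluid.layers.fsp_matrix and using illustrative shapes:

```python
import paddle.fluid as fluid

# Two feature maps from the same network; they must share
# batch size, height and width (channel counts may differ).
feat_a = fluid.layers.data(name="feat_a", shape=[16, 28, 28], dtype="float32")
feat_b = fluid.layers.data(name="feat_b", shape=[32, 28, 28], dtype="float32")

# FSP matrix of shape [batch, 16, 32].
fsp = fluid.layers.fsp_matrix(feat_a, feat_b)
```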

- 23 March 2019, 1 commit

Committed by whs

* Add range op. test=develop
* Add more unit tests. test=develop
* Fix API.spec. test=develop
* Fix API.spec. test=develop
* Fix API.spec. test=develop
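
The range op mirrors Python's built-in range at graph-construction time, producing a 1-D tensor. A minimal sketch, assuming it is exposed as fluid.layers.range:

```python
import paddle.fluid as fluid

# 1-D int32 tensor [0, 2, 4, 6, 8], analogous to Python's range(0, 10, 2).
seq = fluid.layers.range(0, 10, 2, 'int32')
```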

- 22 March 2019, 1 commit

Committed by qingqing01

- 21 March 2019, 3 commits

Committed by qingqing01

* Rewrite the gradient ProtoMaker for affine_channel_op to remove the Output as the input.
* Add act in the Python API so the activation can be applied in-place by layer_helper.py
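
The second bullet refers to the new optional act argument on fluid.layers.affine_channel. A short sketch, under the assumption that scale and bias are per-channel parameters created by the caller:

```python
import paddle.fluid as fluid

x = fluid.layers.data(name="x", shape=[64, 32, 32], dtype="float32")

# One scale and one bias value per channel of x.
scale = fluid.layers.create_parameter(shape=[64], dtype="float32")
bias = fluid.layers.create_parameter(shape=[64], dtype="float32")

# act='relu' applies the activation directly on the affine output.
out = fluid.layers.affine_channel(x, scale=scale, bias=bias, act='relu')
```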

Committed by phlrain

Committed by phlrain

- 20 March 2019, 1 commit

Committed by Wu Yi

* wip allreduce in op
* wip
* wip
* wip
* wip adding test
* wip for conflict with mp mode
* fix tests, test=develop
* fix cpu build, test=develop
* fix travis clang format, test=develop
* fix cpu build, test=develop
* update api.spec, test=develop
* delete comment, test=develop
* fix cpplint, test=develop
* fix, test=develop
* follow comment, test=develop
* add file, test=develop
* fix build, test=develop
* update, test=develop
* to be compatible with sync_bn, and fix mp mode in develop, test=develop

- 19 March 2019, 4 commits

Committed by whs

* Make step_input support custom lod level. test=develop
* Fix API.spec. test=develop
* Fix API.spec. test=develop
* Fix API.spec. test=develop
* Add default value in document of step_input. test=develop
* Fix document. test=develop
* Fix API.spec. test=develop
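
A minimal sketch of what "custom lod level" could look like for DynamicRNN.step_input; the level keyword and the input shapes here are assumptions inferred from the commit title, not a confirmed signature.

```python
import paddle.fluid as fluid

# A 2-level LoDTensor input, e.g. paragraphs -> sentences -> feature vectors.
seq = fluid.layers.data(name="seq", shape=[32], dtype="float32", lod_level=2)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    # Assumed `level` argument: choose which LoD level drives the time steps
    # (the previous behaviour corresponds to the default level).
    step = drnn.step_input(seq, level=1)
    drnn.output(step)

out = drnn()
```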

Committed by zhhsplendid

test=develop

Committed by ceci3

Committed by sneaxiy

test=develop

- 18 March 2019, 4 commits

Committed by dengkaipeng

Committed by dengkaipeng

Committed by dengkaipeng

Committed by Xin Pan

test=develop

- 17 March 2019, 1 commit

Committed by dengkaipeng

- 15 March 2019, 5 commits

Committed by Xin Pan

test=develop

Committed by Aurelius84

Committed by ceci3

Committed by qingqing01

* Support Sync Batch Norm.
* Note: do not enable it on a single device. Usage:
  build_strategy = fluid.BuildStrategy()
  build_strategy.sync_batch_norm = True
  binary = fluid.compiler.CompiledProgram(tp).with_data_parallel(
      loss_name=loss_mean.name, build_strategy=build_strategy)
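
A slightly expanded, self-contained version of the usage fragment above; the toy network, variable names, and shapes are illustrative assumptions.

```python
import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    img = fluid.layers.data(name="img", shape=[3, 32, 32], dtype="float32")
    conv = fluid.layers.conv2d(img, num_filters=16, filter_size=3)
    bn = fluid.layers.batch_norm(conv)
    loss_mean = fluid.layers.reduce_mean(bn)

# Share batch-norm statistics across devices; per the note above,
# this is only meaningful with more than one device.
build_strategy = fluid.BuildStrategy()
build_strategy.sync_batch_norm = True

binary = fluid.compiler.CompiledProgram(main_prog).with_data_parallel(
    loss_name=loss_mean.name, build_strategy=build_strategy)
```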

Committed by Aurelius84

- 14 March 2019, 2 commits