- 10 Aug 2020, 1 commit
  Committed by tangwei12
  * add paddle.fleet.AsyncOptimizer
    Co-authored-by: dongdaxiang <dongdaxiang@baidu.com>
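A hedged sketch of how the new optimizer is typically reached from user code, written against the released paddle.distributed.fleet module path (the commit adds it under paddle.fleet). It is assumed here that AsyncOptimizer is selected internally as a meta-optimizer once `a_sync` is enabled, so user code only configures the strategy and never names it directly.

```python
# Illustrative only: asynchronous parameter-server training requested via
# DistributedStrategy.a_sync; normally launched with paddle.distributed.launch.
import paddle
import paddle.distributed.fleet as fleet

fleet.init()                        # parameter-server (non-collective) role setup

strategy = fleet.DistributedStrategy()
strategy.a_sync = True              # asynchronous push/pull between workers and servers

net = paddle.nn.Linear(10, 1)
sgd = paddle.optimizer.SGD(learning_rate=0.01, parameters=net.parameters())
sgd = fleet.distributed_optimizer(sgd, strategy=strategy)
```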
- 05 Aug 2020, 1 commit
  Committed by Dong Daxiang
  * generate context during compile
- 20 Jul 2020, 1 commit
  Committed by Dong Daxiang
  * refactor fleet api under paddle.fleet; update DistributedStrategy
- 06 Jul 2020, 1 commit
  Committed by Dong Daxiang
  * add paddle.fleet.DistributedStrategy for 2.0
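For context, a minimal sketch of how DistributedStrategy is configured and handed to the distributed optimizer in released Paddle 2.x; the exact module path at the time of this commit was paddle.fleet, and the strategy flags shown are just examples.

```python
# Hedged sketch, assuming a collective (multi-GPU) job started with
# `python -m paddle.distributed.launch`.
import paddle
import paddle.distributed.fleet as fleet

fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
strategy.amp = True          # turn on automatic mixed precision
strategy.recompute = False   # gradient checkpointing left off

net = paddle.nn.Linear(128, 10)
opt = paddle.optimizer.Adam(learning_rate=1e-3, parameters=net.parameters())
opt = fleet.distributed_optimizer(opt, strategy=strategy)  # strategy decides which meta-optimizers apply
```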
- 13 May 2020, 1 commit
  Committed by LielinJiang
  * add vision
  * fix predict, test=develop
  * add unittest for vision apis, test=develop
  * fix typos
  * add hapi models api, test=develop
  * fix code format, test=develop
  * fix typos, test=develop
  * fix sample code import, test=develop
  * fix sample codes, test=develop
  * add decompress, test=develop
  * rm darknet, test=develop
  * rm debug code, test=develop
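A hedged usage sketch of the vision models this change introduces, written against the paddle.vision namespace they ended up in (at commit time they lived under the hapi package); the backbone and shapes below are illustrative.

```python
# Illustrative only: build one of the vision backbones and run a forward pass;
# pretrained=True would additionally download weights.
import paddle
from paddle.vision import models

net = models.resnet18(pretrained=False, num_classes=10)
x = paddle.randn([1, 3, 224, 224])   # NCHW dummy batch
logits = net(x)                      # shape: [1, 10]
print(logits.shape)
```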
- 11 May 2020, 1 commit
  Committed by qingqing01
  * Merge hapi into Paddle. Hapi is a high-level API for training and inference; its main modules are Model, Loss, Metrics, and Dataset. It also includes common modules and models in NLP and computer vision, such as BERT and ResNet. These modules were developed by: 0YuanZhang0, guoshengCS, heavengate, LielinJiang, qingqing01, xyzhou-puck, huangjun12, wangxiao1021, zhangyang.
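A hedged sketch of the Model / Loss / Metrics / Dataset workflow described above, using the names these modules carry in released Paddle 2.x (at merge time they sat under the incubating hapi package):

```python
# Minimal train loop with the high-level API; the MNIST dataset downloads on first use.
import paddle
from paddle.vision.transforms import ToTensor

train_ds = paddle.vision.datasets.MNIST(mode='train', transform=ToTensor())

model = paddle.Model(paddle.vision.models.LeNet())
model.prepare(
    optimizer=paddle.optimizer.Adam(parameters=model.parameters()),
    loss=paddle.nn.CrossEntropyLoss(),
    metrics=paddle.metric.Accuracy())
model.fit(train_ds, epochs=1, batch_size=64, verbose=1)
```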
- 23 Mar 2020, 1 commit
  Committed by XiaoguangHu
- 17 Sep 2018, 1 commit
  Committed by yuyang
- 03 Sep 2018, 1 commit
  Committed by chengduo
  * fix high level API (Inference) bug
  * patch the unit tests
- 15 Aug 2018, 1 commit
  Committed by minqiyang
- 26 Jul 2018, 1 commit
  Committed by minqiyang
- 18 Jun 2018, 1 commit
  Committed by qiaolongfei
- 09 Jun 2018, 1 commit
  Committed by Jeff Wang
  * Use for_test=True in the Fluid Trainer to clone the test program
  * fix typo
  * Do the same thing for the Inferencer
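A hedged sketch of the Fluid mechanism behind `for_test=True`, which the Trainer presumably wraps: cloning the main program before optimizer ops are appended yields a test program in which operators such as dropout and batch_norm switch to inference behaviour. The toy network below is illustrative.

```python
# Sketch only: static-graph Fluid program cloned for evaluation.
import paddle
import paddle.fluid as fluid

paddle.enable_static()  # needed on Paddle >= 2.0; 1.x Fluid was static by default

main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
    pred = fluid.layers.fc(input=x, size=1)
    loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))
    # Clone BEFORE minimize() so the test program carries no backward/optimizer ops.
    test_prog = main_prog.clone(for_test=True)
    fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)
```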
- 24 May 2018, 1 commit
  Committed by daminglu
- 18 May 2018, 1 commit
  Committed by qiaolongfei
- 17 May 2018, 2 commits
  Committed by qiaolongfei
  Committed by qiaolongfei
- 16 May 2018, 1 commit
  Committed by daminglu
- 12 May 2018, 1 commit
  Committed by Qiao Longfei
  * add Inference.infer
  * optimize code
  * update no_test_word2vec_new_api.py
  * update trainer
  * split check_and_get_place
  * use inference_program to save inference model in Trainer
  * update demo
  * update save_inference_model
  * clean code
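For reference, a hedged sketch of the fluid.io pair that the Trainer's `save_inference_model` support builds on (classic Fluid signatures; the network and directory name are illustrative):

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()  # Fluid-era static graph (implicit on 1.x)

exe = fluid.Executor(fluid.CPUPlace())
main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    pred = fluid.layers.fc(input=x, size=1)
exe.run(startup_prog)

# Persist only the subgraph needed for inference (feed 'x', fetch `pred`).
fluid.io.save_inference_model(dirname='infer_model', feeded_var_names=['x'],
                              target_vars=[pred], executor=exe,
                              main_program=main_prog)

# Later / elsewhere: reload the pruned program together with its parameters.
infer_prog, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname='infer_model', executor=exe)
```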
- 05 May 2018, 1 commit
  Committed by Jeff Wang
  * Load/save the params from the params_path
  * Switch to use load_persistables and save_persistables
  * Instead of setting up the executor to run the program and scope, pass the program to load_persistables
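A hedged sketch of the fluid.io calls the Trainer switched to here; the `params_path` directory name mirrors the commit message and the toy program is illustrative.

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()

exe = fluid.Executor(fluid.CPUPlace())
main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    fluid.layers.fc(input=x, size=1)
exe.run(startup_prog)

# Save every persistable variable of the program (parameters, optimizer state, ...).
fluid.io.save_persistables(exe, dirname='params_path', main_program=main_prog)
# ... and restore them later by passing the program, rather than wiring up a scope by hand.
fluid.io.load_persistables(exe, dirname='params_path', main_program=main_prog)
```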
- 03 May 2018, 1 commit
  Committed by Helin Wang
  * The Trainer and Inferencer will load params from disk if the param_path argument is not None in their constructor.
  * Remove params.py; we will expose core.Scope to the user if needed (e.g., for GAN). Currently we will not expose it, unless we clearly know doing so can support GAN.
  * Add `save_params` to the Trainer (a TODO item).
  * Rename "network" to "program".
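Hedged and illustrative only: the `save_params` TODO most plausibly maps onto the existing fluid.io.save_params / load_params helpers, which persist trainable parameters only (optimizer state and other persistables go through save_persistables). The `param_path` name below mirrors the constructor argument mentioned above.

```python
import paddle
import paddle.fluid as fluid

paddle.enable_static()

exe = fluid.Executor(fluid.CPUPlace())
prog, startup = fluid.Program(), fluid.Program()
with fluid.program_guard(prog, startup):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    fluid.layers.fc(input=x, size=1)
exe.run(startup)

# Save and later reload only the trainable parameters of `prog`.
fluid.io.save_params(exe, dirname='param_path', main_program=prog)
fluid.io.load_params(exe, dirname='param_path', main_program=prog)
```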
- 02 May 2018, 4 commits
  Committed by Helin Wang
  Committed by Helin Wang
  Committed by Helin Wang
  Committed by Helin Wang
- 12 Feb 2018, 1 commit
  Committed by qingqing01
- 21 Jan 2018, 1 commit
  Committed by dzhwinter
  * "fix decode bug"
  * "follow comment"
  * "fix error"
  * "fix hook bug"
  * fix based on comment
  * fix copyright
  * fix based on comment
- 15 Jan 2018, 1 commit
  Committed by dzhwinter
  * add copyright hook
  * add copyright hook
  * refine copyright hook
  * "test copyright hook"
  * fix check style
  * fix ci
- 12 Nov 2016, 1 commit
  Committed by qijun
- 28 Sep 2016, 1 commit
  Committed by Yu Yang
  * Fix lots of trainer_config_helpers bugs, and complete unittest for `layers.py`