Multi-machine training of image_classification fails when memory optimization is enabled
Created by: kolinwei
Traceback (most recent call last):
  File "dist_image_classification.py", line 139, in <module>
    runtime_main(TestDistImageClassification)
  File "/work/weike/svn/baidu/paddle/test/cts_test/dist_base.py", line 243, in runtime_main
    model.run_trainer(endpoints, trainer_id, trainers, run_params)
  File "/work/weike/svn/baidu/paddle/test/cts_test/dist_base.py", line 176, in run_trainer
    build_strategy=build_strategy)
  File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/parallel_executor.py", line 165, in __init__
    build_strategy, num_trainers, trainer_id)
paddle.fluid.core.EnforceNotMet: can not find right place for distributed op: conv2d_grad at [/paddle/paddle/fluid/framework/details/multi_devices_graph_pass.cc:754]
PaddlePaddle Call Stacks:
0   0x7f8c84172e86p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 486
1   0x7f8c84f91fe9p paddle::framework::details::MultiDevSSAGraphBuilder::CreateDistTrainOp(paddle::framework::ir::Graph*, paddle::framework::ir::Node*) const + 2345
2   0x7f8c84f95469p paddle::framework::details::MultiDevSSAGraphBuilder::ApplyImpl(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete<paddle::framework::ir::Graph> >) const + 2025
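For context, below is a minimal sketch of the kind of trainer-side setup the traceback points at, assuming the Fluid 1.x distributed-training API. The names (avg_cost, pserver_endpoints, current_endpoint, trainer_id, trainers) are illustrative placeholders, not taken from the original test code; the point is only the order of steps: transpile the program, enable memory optimization, then build the ParallelExecutor with a build_strategy, which is where the EnforceNotMet above is raised.

```python
import paddle.fluid as fluid

def build_trainer_exe(avg_cost, trainer_id, trainers,
                      pserver_endpoints, current_endpoint):
    # Transpile the default program for multi-machine (pserver) training.
    t = fluid.DistributeTranspiler()
    t.transpile(trainer_id,
                pservers=pserver_endpoints,
                trainers=trainers,
                current_endpoint=current_endpoint)
    trainer_prog = t.get_trainer_program()

    # Enabling memory optimization on the transpiled program is the step
    # the report says triggers the failure.
    fluid.memory_optimize(trainer_prog)

    build_strategy = fluid.BuildStrategy()
    # ParallelExecutor construction fails here: multi_devices_graph_pass.cc
    # cannot find a placement for the distributed conv2d_grad op.
    return fluid.ParallelExecutor(use_cuda=True,
                                  loss_name=avg_cost.name,
                                  main_program=trainer_prog,
                                  build_strategy=build_strategy,
                                  num_trainers=trainers,
                                  trainer_id=trainer_id)
```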