- Aug 31, 2022: 16 commits
- Committed by feifei-111
  * test=kunlun
  * test=kunlun
- Committed by feifei-111
  * test=kunlun
  * test=kunlun
- Committed by kangguangli
  * migrate compare op (E/NE/LE/LT/GE/GT) from fluid to phi
  * fix compile error
  * fix memory alloc; test=kunlun
  * refine code; test=kunlun
  * replace PADDLE_ENFORCE with PADDLE_ENFORCE_XDNN_SUCCESS
- Committed by 六个骨头
- Committed by Aurelius84
  * [OpAttr] output_size of unpool support Tensor type
  * fix coverage
  * fix contain_var
  * fix coverage
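
  As a rough illustration of what this attribute change implies at the Python layer, the sketch below passes output_size to max_unpool2d as a Tensor computed at runtime; that Tensor-typed usage is an assumption drawn from the commit title, not code taken from the PR.

  ```python
  # Hypothetical sketch, assuming output_size of max_unpool2d now accepts a Tensor.
  import paddle
  import paddle.nn.functional as F

  x = paddle.rand([1, 1, 6, 6])
  pooled, indices = F.max_pool2d(x, kernel_size=2, stride=2, return_mask=True)

  # With a Tensor-typed attribute, the target size can come from a runtime shape
  # instead of a hard-coded Python list.
  out_size = paddle.shape(x)[2:]  # Tensor holding [6, 6]
  restored = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2,
                            output_size=out_size)
  print(restored.shape)  # [1, 1, 6, 6]
  ```
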
- Committed by WangZhen
  * Move XPU momentum to phi, test=kunlun
  * Fix mu type, test=kunlun
- Committed by Aurelius84
  * [XPU] Migrate argsort and arg_max XPU kernel into Phi
  * test=kunlun
  * test=kunlun
- Committed by xiongkun
  * transfer concat kernel
  * test=kunlun
  * test=kunlun
  * test=kunlun
  * test=kunlun
- Committed by zhangbo9674
- Committed by Leo Chen
- Committed by LiYuRio
- Committed by james
  * move pool/pool_grad xpu kernel to phi, test=kunlun
  * replace mutable_data() with DeviceContext::Alloc()
  * replace PADDLE_ENFORCE_EQ with PADDLE_ENFORCE_XDNN, test=kunlun
  * adjust function param name & update include header
  * remove pool_op_xpu.cc
  * fire r200 test
  * minor, test=kunlun
- Committed by Charles-hit
  * fix split bug
  * fix function redefinition
  * fix fluid.layers.split and add unit test
  * delete splitInferMeta registration in unary.cc
  * modify test_split_op GPU unit test
  * modify test_split_op GPU unit test place param
  * refactor split op and fix infershape bugs
  * add () around && and ||
  * fix split C++ unit test
  * fix split infershape
- Committed by WangZhen
  * Move XPU mean and mean_grad to phi, test=kunlun
  * Fix stream, test=kunlun
  * Replace ENFORCE, test=kunlun
- Committed by Wilber
- Committed by Li Min

- Aug 30, 2022: 15 commits
- Committed by ronnywang
  * [NPU] fix pool_op, interpolate_op
  * fix slice_op_npu
  * fix test_mixed_precision_npu
- Committed by HongyuJia
  * add coalesce_tensor kernel
  * polish coalesce_tensor kernel
  * add sig and InferMeta
  * add testcase
  * add legacy_api.yaml
  * fix infermeta
  * fix yaml
  * fix kernel implementation
  * add compile dependency of phi/kernels
  * fix MetaConfig
  * add python api
  * add and fix testcase
  * rnn.py add import
  * change _C_ops.coalesce_tensor
  * remove useless comments
  * add SetBackend
  * restore XPU kernel temporarily
  * fix code according to PR comments
- Committed by pangyoki
  * move huber_loss xpu kernel to phi, test=kunlun
  * fix, test=kunlun
  * fix paddle_enforce, test=kunlun
- Committed by zhangyikun02
- Committed by WangZhen
  * Adapt tensor axis for argmin/max
  * Add UT
  * Polish UT
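
  A minimal sketch of what a Tensor-valued axis for argmin/argmax could look like from Python; passing axis as a Tensor rather than a plain int is assumed from the commit title.

  ```python
  # Hypothetical sketch, assuming argmax/argmin accept a Tensor-valued axis.
  import paddle

  x = paddle.to_tensor([[1.0, 5.0, 3.0],
                        [4.0, 2.0, 6.0]])
  axis = paddle.to_tensor(1)          # 0-D Tensor instead of a Python int

  print(paddle.argmax(x, axis=axis))  # expected [1, 2]
  print(paddle.argmin(x, axis=axis))  # expected [0, 1]
  ```
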
- Committed by pangyoki
  * move layer_norm xpu kernel to phi, test=kunlun
  * fix, test=kunlun
- Committed by wanghuancoder
- Committed by WangZhen
  * [OpAttr] Adapt tensor axis for reduce_min/max/mean/sum/prod
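
  The same idea applied to the reduce ops listed above; the Tensor-valued axis here is likewise an assumption based on the commit title rather than the PR's own tests.

  ```python
  # Hypothetical sketch, assuming reduce ops accept a Tensor-valued axis.
  import paddle

  x = paddle.arange(12, dtype='float32').reshape([3, 4])
  axis = paddle.to_tensor([0])        # Tensor instead of a Python int/list

  print(paddle.sum(x, axis=axis))     # column sums, shape [4]
  print(paddle.mean(x, axis=axis))    # column means, shape [4]
  ```
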
- Committed by zyfncg
  * add runtime config in phi
  * add runtime attr for op desc and op
  * fix no proto error
  * adjust opdesc set_attr impl
  * try to remove conv_op extra attrs
  * add init runtime attr map
  * change extra header path
  * fix runtime_attr
  * fix trace_op
  * fix bug of pass
  * fix merge conflict
  * fix dygraph attrs
  * fix bug of pass
  * fix dygraph bug
  * fix unittest module
  * delete extra attr default
  * fix dropout kernel
  * polish code
  * fix extra output of instance_norm
  * fix merge conflict
  * fix op_desc bug
  * add extra attr in yaml for conv3d_transpose
  * don't remove extra input and output
  * fix save_inference_model
  * fix bug of batch_norm
  * revert some change
  * polish log
  * polish code
  * add code comment
  * Co-authored-by: Chen Weihang <chenweihang@baidu.com>
- Committed by haosicheng
  * fix missing keep_dim variable
  * fix missing grad check in unittest
  * add new test case
- Committed by Aurelius84
  * [OpAttr] padding_value of Pad support Tensor type
  * fix unittest
  * fix unittest
  * fix coverage
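
  A sketch of how the Tensor-typed padding_value might surface through paddle.nn.functional.pad; mapping the op attribute to the Python `value` argument and passing it as a 0-D Tensor are both assumptions here.

  ```python
  # Hypothetical sketch, assuming the constant pad value can be a 0-D Tensor.
  import paddle
  import paddle.nn.functional as F

  x = paddle.ones([1, 1, 2, 2])
  pad_value = paddle.to_tensor(-1.0)  # Tensor instead of a Python float

  y = F.pad(x, pad=[1, 1, 1, 1], mode='constant', value=pad_value)
  print(y.shape)  # [1, 1, 4, 4]
  ```
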
- Committed by zhoutianzi666
  * add constant folding pass; for some models it reduces latency
- Committed by risemeup1
- Committed by Leo Chen
  * move xpu kernel to phi
  * delete fluid file
  * fix compile
  * add guard, test=kunlun
  * xpu set constant
  * fix xpu error, test=kunlun
- Committed by WangZhen

- Aug 29, 2022: 9 commits
- Committed by zhangbo9674
  * add interpretercore
  * refine backward program id
  * add code
  * refine program
  * refine code
  * create forward/backward_program by prog2graph2prog method
  * test, do not care
  * refine code
  * refine code
  * refine code
  * test, do not care
  * add interpretercore
  * add scope
  * refine scope create method
  * add jit for new_exe
  * solve conflict
  * delete unused code
  * polish code
  * polish code
  * refine scope in inplace
  * refine for datatransfer
  * refine _rebuild_from_desc
  * refine control eager deletion attr
  * refine used_for_jit
  * refine jit for infer
  * op size0 use ori program
  * polish code
  * refine jit
  * refine run_program_op ut
  * refine inplace
  * refine control
  * refine graph helper
  * refine control
  * refine inplace
  * refine buffer_share_inplace_pass
  * polish code
  * polish code
  * refine usage for compilerProgram
  * refine control
  * test
  * test core cache
  * refine code
  * refine io.py
  * increase test_seq2seq timeout
  * refine convert program
  * refine interpretercore_cache release
  * delete buildinplace
  * refine partial_program && io
  * refine code for io
  * test
  * test
  * test
- Committed by Yuanle Liu
- Committed by Qi Li
  * [MLU] fix compile error, test=develop
  * fix more compile error, test=develop
- Committed by YuanRisheng
  * mv elementwise add to xpu, test=kunlun
  * fix ci bugs, test=kunlun
  * fix ci bugs, test=kunlun
- Committed by Sławomir Siwek
  * abs relu6 fwd
  * abs bwd
  * gaussian_random_kernel and mkldnn-onednn renaming
  * scale kernel
  * whitespace
  * whitespace
  * revert scale migration
  * whitespaces
  * revert changes to gaussian kernel
  * whitespaces
- Committed by Weilong Wu
  * [XPU] migrate mul to phi; test=kunlun
  * rm fluid mul xpu op; test=kunlun
- Committed by Chen Weihang
  * migrate assign xpu kernel, test=kunlun
  * remove assign_value xpu, test=kunlun
- Committed by Charles-hit
  * support backward reusing forward in dygraph
  * modify backward api exponential__grad yaml
  * remove print code
  * when the backward pass reuses the forward pass, check whether a higher-order backward is needed; if not, call the C++ API directly, otherwise invoke the forward dygraph to generate the backward nodes
  * fix some backward bugs
  * modify the generated dygraph function name
- Committed by wanghuancoder
  * gather, gather_grad, gather_nd, gaussian_random xpu to phi