- 13 Dec 2022, 1 commit
  Committed by Qi Li
- 09 Dec 2022, 1 commit
  Committed by PuQing
- 07 Dec 2022, 1 commit
  Committed by Qi Li
- 25 Nov 2022, 1 commit
  Committed by Nyakku Shigure
- 18 Nov 2022, 1 commit
  Committed by Tian Zheng
  * Refactor conv_kernel and conv_grad_kernel to provide an interface for the cuDNN v8 implementation
  * Fix macro
  * Add implementation for conv_kernel and conv_grad_kernel
  * Modification after rebase onto latest develop
  * Modify plan cache to comply with the API of phi::autotune
  * Refactor to reduce duplicate code
  * Review fixes:
    - move functions in conv_kernel_impl_v8.h and conv_grad_kernel_impl_v8.h to conv_kernel.cu and conv_grad_kernel.cu
    - add const specifier for input tensor
    - add logging when plans fail to execute
    - move CudnnConvBwdFilterV8 and CudnnConvBwdDataV8 to conv_cudnn_frontend.h
  * Move plan building outside of the cache
  * Fix ROCM build
- 05 Nov 2022, 1 commit
  Committed by Yiqun Liu
- 02 Nov 2022, 2 commits
  Committed by Yiqun Liu
  Improve the tool for checking nan and inf, and support computing the max, min and mean of the output tensor. (#47095)
  * Improve the tool for checking nan and inf, and support computing the max, min and mean of the output tensor.
  * Add a FLAGS to control whether to abort when inf/nan is met, and polish the code.
  * Fix unittest.
  * Change the computing of mean.
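The checking described in this commit is driven by Paddle's debug flags. A minimal sketch follows, assuming the long-standing FLAGS_check_nan_inf switch is the entry point to the tool; the new flag that controls whether execution aborts on inf/nan is not named in the log, so it is omitted here.

```python
# Minimal sketch (assumption: FLAGS_check_nan_inf enables the per-op checker).
import os
os.environ["FLAGS_check_nan_inf"] = "1"  # must be set before paddle is imported

import paddle

x = paddle.to_tensor([1.0, -2.0, 3.0])
y = paddle.log(x)  # log(-2.0) produces nan; the checker reports it, including
                   # the max/min/mean of the offending output tensor
```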
  Committed by Tian Zheng
  * Add build option for CUDNN Frontend API
  * Fix review comments
  * Change namespace for cudnn_frontend.h
- 27 Oct 2022, 1 commit
  Committed by Aurelius84
  * add predictor_engine
  * add predictor_engine
  * fix zero shape
  * fix lodTensor
  * fix unittest
  * fix code style
  * update CMakeLists
- 16 Sep 2022, 2 commits
  Committed by JingZhuangzhuang
  Committed by Leo Chen
  * add interpretercore for jit engine
  * add ut
- 14 Sep 2022, 1 commit
  Committed by JingZhuangzhuang
  * merge python lib
  * Update third_party.cmake
  * Update CMakeLists.txt
- 25 Aug 2022, 1 commit
  Committed by hong
  * optimize conv algo speed
  * code polish
  * remove useless code
  * fix compile error
  * fix cpu compile error
  * not use cudnn algo t
  * add search cache max number
  * polish code
  * fix cache test bug
  * add groups data format to conv args
  * fix cache test bug
  * fix cudnn_deterministic bug
  * fix test switch autotune bug
  * fix test switch autotune bug
  * fix conv cache bug
  * fix cache test error
  * fix cache test bug
  * fix windows/mac compile error
  * fix workspace search error
  * update cudnn cache
  * fix cache test bug; test=develop
  * fix autotune switch test error
  * polish code
  * polish code
- 08 Aug 2022, 1 commit
  Committed by WangZhen
  * Polish function code
  * Rename function to engine
  * Fix log msg and doc
  * Rename Function to Engine and use a new Function class to wrap Engine
  * Rename EngineInfo
  * Adjust member variable order
- 01 Aug 2022, 1 commit
  Committed by danleifeng
  Co-authored-by: seemingwang <zsasuke@qq.com>
  Co-authored-by: DesmonDay <908660116@qq.com>
  Co-authored-by: seemingwang <seemingwang@users.noreply.github.com>
  Co-authored-by: Thunderbrook <a754913769@163.com>
  Co-authored-by: xuewujiao <105861147+xuewujiao@users.noreply.github.com>
  Co-authored-by: root <root@yq01-sys-hic-k8s-v100-box-a225-0693.yq01.baidu.com>
  Co-authored-by: Thunderbrook <52529258+Thunderbrook@users.noreply.github.com>
  Co-authored-by: root <root@yq01-inf-hic-k8s-a100-ab2-0009.yq01.baidu.com>
  Co-authored-by: huwei02 <53012141+huwei02@users.noreply.github.com>
  Co-authored-by: yaoxuefeng <yaoxuefeng@baidu.com>
  Co-authored-by: lxsbupt <luoxsbupt@163.com>
  Co-authored-by: miaoli06 <106585574+miaoli06@users.noreply.github.com>
  Co-authored-by: root <root@yq01-inf-hic-k8s-a100-ab2-0008.yq01.baidu.com>
  Co-authored-by: chao9527 <33347532+chao9527@users.noreply.github.com>
  Co-authored-by: qingshui <qshuihu@gmail.com>
  Co-authored-by: yangjunchao <yangjunchao@baidu.com>
- 29 Jul 2022, 1 commit
  Committed by Aganlengzi
  * add FLAGS_enable_api_kernel_fallback
  * deal with more cases
  * add ut for coverage
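A hedged sketch of toggling this process-level switch; only the flag name comes from the log, and whether a fallback actually happens depends on which kernels are available on the current backend.

```python
# Minimal sketch: turn the kernel fallback switch on for this process.
# Paddle reads FLAGS_* environment variables at startup, so set it before import.
import os
os.environ["FLAGS_enable_api_kernel_fallback"] = "true"

import paddle
# Run the model as usual; with the flag on, a phi API that cannot find a kernel
# for the current backend may fall back to an available one instead of erroring.
```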
- 21 Jul 2022, 1 commit
  Committed by WangZhen
  * Support predictor function in JitLayer
  * Pybind PEFunction
  * Pybind PEFunction and call phi api in layer_test
  * Call sqrt phi API
  * Polish flags
  * Fix comments
- 26 Jun 2022, 1 commit
  Committed by Sing_chan
- 27 May 2022, 1 commit
  Committed by xiongkun
- 20 May 2022, 1 commit
  Committed by Leo Chen
  * use fp32 compute type for cublasGemmStridedBatchedEx with fp16 input/output
  * add flags to control compute type
  * default to false
  * add unit test
  * default to true
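The flag that controls the compute type is not named in this log, so the name used below is an assumption; the sketch only illustrates the setting the commit describes: FP16 inputs and outputs with the accumulation (compute) type selected by a flag.

```python
# Hedged sketch. The flag name is an assumption -- the log only says
# "add flags to control compute type". With half-precision compute disabled,
# FP16 batched matmuls accumulate in FP32 via cublasGemmStridedBatchedEx.
import os
os.environ["FLAGS_gemm_use_half_precision_compute_type"] = "0"  # hypothetical name

import paddle

a = paddle.randn([8, 64, 128]).astype("float16")
b = paddle.randn([8, 128, 32]).astype("float16")
c = paddle.matmul(a, b)  # FP16 in/out; accumulation precision follows the flag (GPU only)
```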
- 22 Apr 2022, 1 commit
  Committed by Ming-Xu Huang
  * Fix leading dimension setting error in fused_gemm_epilogue_grad_op.
  * Add dyload to cuBlasLt functions.
  * Added cublasLtMatmulAlgoGetHeuristic to improve performance.
  * Added FLAGS_cublaslt_exhaustive_search_times to the cublasLt epilogue.
  * Added UTs for FLAGS_cublaslt_exhaustive_search_times.
  * Added warmup runs in algo searching of Gemm epilogue.
  * Update copyright and documents.
  * Fixed error handling.
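A sketch of setting the new search-count flag; the value is arbitrary, and the fused GEMM-plus-epilogue path is only exercised on GPU builds that use cuBlasLt.

```python
# Minimal sketch: let the cuBlasLt epilogue path run a bounded exhaustive search
# when picking a matmul algorithm (the trial count below is arbitrary).
import os
os.environ["FLAGS_cublaslt_exhaustive_search_times"] = "10"

import paddle
# Models that hit the fused_gemm_epilogue op (matmul + bias + activation) perform
# the algorithm search on first execution; per the commit, warmup runs are part
# of that search.
```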
- 18 Apr 2022, 1 commit
  Committed by TeFeng Chen
  cinn_launch_op: optimize the overhead of preparing variables before executing the CINN-compiled program (#41777)
  * optimize preparation overhead before executing the CINN-compiled program
  * update code notes
  * fix flag annotation
  * add a flag for the auto-tune feature beforehand
- 15 Apr 2022, 1 commit
  Committed by limingshu
  * change cudnn helper for auto-tune
  * Add FLAGS_use_autotune to set the global status of autotune and change the order of choosing algorithms.
  * Fix the bug in calculating and printing the current step cache hit rate.
  * Improve the autotune cache and fix unittest.
  * Change the key from AlgorithmType to int64_t.
  * Fix unittest for cpu-only env.
  * change ChooseAlgoByWorkspace for heuristic mode
  Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
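A minimal sketch of turning the global autotune status on, assuming the flag is picked up from the environment like other Paddle FLAGS; the short loop is only there so the tuned algorithm choices get cached.

```python
# Minimal sketch: enable the global autotune status added by this commit.
import os
os.environ["FLAGS_use_autotune"] = "1"

import paddle
import paddle.nn as nn

conv = nn.Conv2D(in_channels=3, out_channels=16, kernel_size=3)
x = paddle.randn([8, 3, 32, 32])
for _ in range(5):
    y = conv(x)  # early iterations tune and cache the conv algorithm choice
```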
- 11 Apr 2022, 1 commit
  Committed by zhouweiwei2014
- 09 Apr 2022, 1 commit
  Committed by limingshu
  * Use the maximum workspace_size of all algorithms to limit the workspace size in exhaustive search mode.
  * Use the system cudaMalloc and cudaFree to allocate workspace during searching.
  * Enable switching between the two kinds of workspace setting methods.
  Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
- 07 Apr 2022, 2 commits
  Committed by zhouweiwei2014
  Committed by JingZhuangzhuang
  * modify infer gpu memory strategy
  * modify infer gpu memory strategy
- 25 Mar 2022, 1 commit
  Committed by Aurelius84
  * [Phi] Migrate Adam and Adamw into Phi
  * fix compile error and unittest ok
  * fix compile error and unittest ok
  * fix undefined reference to fLI::FLAGS
  * test depend on operator
  * fix cmake
  * fix xpu compile
  * fix infrt
  * fix amp_type_traits
  * fix amp_type_traits
  * modify according to reviewer
  * modify according to reviewer
  * fix dtype float16
  * fix typo
  * fix CMake
  * fix code style
- 23 Feb 2022, 1 commit
  Committed by ShenLiang
  * add processgroup_nccl
- 15 Feb 2022, 1 commit
  Committed by ronnywang
  * [CustomRuntime] Add DeviceManager
  * [CustomRuntime] Add DeviceInterface
  * [CustomRuntime] Add Stream, Event, DeviceGuard, CallbackManager
  * [CustomRuntime] Add plug-in device
  * [CustomRuntime] Memory module supports PluggableDevice
  * [CustomRuntime] Add WITH_PLUGGABLE_DEVICE cmake option
  * update
  * [API] update API doc based on comments, test=develop
  Co-authored-by: qili93 <qili93@qq.com>
- 30 Jan 2022, 1 commit
  Committed by Leo Chen
- 29 Jan 2022, 1 commit
  Committed by Liu-xiandong
  * Add XPU compiler for paddle, test=develop
  * clean code
  * clean useless code
  * clean useless code
  * clean useless code
  * test
  * add include path
  * use clang compiler
  * xpu2.cmake
  * XPU2 compiler passed
  * update
  * update after pten
  * combine WITH_XPU and WITH_XPU2
  * update the fuse operation in WITH_XPU and WITH_XPU2
  * update
  * update
  * update
  * fix the merge error
  * update
  * update the code
  * update the code
  * add run_kp_kernel flag
  * update
  * update
  * fix prepared type_ bug
  * clean and update the code
  * reset the kernel_primitives
  * update
  * clean the code
  * delete useless comment
  * fix the bug in WITH_XPU
  * update
  * update
  * modify the abi
  * delete some useless code
  * Parameter automation in xpu compilation
  * Parameter automation in xpu compilation
  * delete kps in cmake
  * delete useless comment
  * clean the code
  * clean the code
- 18 Jan 2022, 2 commits
  Committed by zhouweiwei2014
  * change CUDA implementation of uniform/gaussian OP
  * fix unittest
  Committed by sneaxiy
  * speed up gelu using fast math
  * add bwd part
- 30 Dec 2021, 1 commit
  Committed by Feng Xing
  This PR adds the runtime flag run_kp_kernel, which chooses which kernel an op runs on XPU2. There are two options: the dynamically linked kernel and the one built from KP (kernel primitives).
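Assuming the flag is exposed as the gflag FLAGS_run_kp_kernel (the log only gives the name run_kp_kernel), a sketch of preferring the KP-built kernels on an XPU2 device:

```python
# Hedged sketch: prefer the kernels built from kernel primitives (KP) on XPU2.
# Assumption: the runtime flag is exposed as FLAGS_run_kp_kernel.
import os
os.environ["FLAGS_run_kp_kernel"] = "true"

import paddle
# On an XPU2 build, ops executed after this point dispatch to the KP-built
# kernel rather than the dynamically linked one, per the description above.
```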
- 20 Dec 2021, 1 commit
  Committed by fwenguang
- 03 Nov 2021, 1 commit
  Committed by Zhen Wang
  Add FLAGS_allow_cinn_ops & FLAGS_deny_cinn_ops for controlling the op types used in training with CINN. (#36842)
  * Update UT test_parallel_executor_run_cinn.py.
  * Add FLAGS_allow_cinn_ops & FLAGS_deny_cinn_ops & FLAGS_cinn_ops_delim.
  * Use the custom StringSplit function and remove the FLAGS_cinn_ops_delim flag.
  * Add FlagController test.
  * Apply a lock to the cache_ only in CinnCompiler.
  * Add VizGraph & ReadableKey methods for CinnCompiler.
  * Update the dot style of VizGraph in CinnCompiler.
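A hedged sketch of using the two lists; only the flag names come from the log, and the ';' separator between op names is an assumption (the log only says a custom StringSplit replaced the configurable delimiter).

```python
# Hedged sketch: restrict which op types CINN may take over during training.
import os
os.environ["FLAGS_allow_cinn_ops"] = "elementwise_add;relu"  # only these may be clustered for CINN
os.environ["FLAGS_deny_cinn_ops"] = ""                        # nothing explicitly denied

import paddle
# Train with a CINN-enabled build (graphs compiled through CinnCompiler); the
# allow/deny lists then control which op types end up in CINN subgraphs.
```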
- 01 Nov 2021, 1 commit
  Committed by Chen Weihang
  * initial tensor design & sign kernel demo
  * add move constructor for meta & add lodtensor
  * add dirs & sign xpu kernel
  * add mean cpu&cuda kernel impl
  * move sign & mean xpu & npu kernel
  * add selected_rows basic impl
  * refactor design, BaseTensor to DenseTensor, etc.
  * add scale mkldnn kernel
  * polish xpu & npu impl details
  * fix mkldnn reuse compile failed
  * change tensor operation lib name
  * rename util filename
  * add more comments
  * change TensorImplInterface to TensorInterface
  * add kernel key and factory
  * remove MKLDNNTensorMeta, add MKLDNNDenseTensor
  * change XXDeviceContext to XXContext
  * add base kernel registrar utils & test on sign
  * replace boost::any by paddle::any
  * fix several ci failed
  * fix npu compile error
  * add ordered map util
  * fix multiple ordered_map compile errors
  * move dev into include dir
  * support sign op in static op run
  * fix static op run error
  * fix new executor compile failed
  * add dygraph branch & remove sign_op.h
  * fix test_infer_no_need_buffer_slots
  * fix rocm compile link error
  * fix unitybuild error & clear glog
  * fix npu compile failed
  * skip quant trans test
  * fix part windows compile problem
  * fix xpu enforce error
  * fix inference test failed
  * remove ordered_map to solve quant failed
  * fix part of rocm compile failed
  * add more register kernels
  * revert scale kernel temporarily
  * fix code format error
  * add new kernel registrar macro
  * rename top to tcmpt
  * revert xpu, npu, mkldnn impl & remove op def
  * add kernel args parse functor to auto parse args
  * revert some change & add scale kernels
  * add op proto in dygraph kernelcontext building
  * polish kernel dispatch logic & naming rule
  * fix scale kernel match error
  * fix scale test failed
  * add mean API and unittest
  * test mean api success
  * add branch to solve compiled error
  * skip clang format error
  * add mean skip rule in op_library
  * add dot kernel, api and unittest (#6)
  * remove old kernel and add symbol link
  * fix dot compiled failed
  * add macro for module declare
  * fix npu and xpu compile error
  * revert sign, mean, scale, dot kernel removing
  * add comment for keeping old kernel impl
  * fix mutable_data error
  * fix bfloat16 conflict
  * fix inference undef error
  * adapt to msvc compile rules
  * polish comment for template inst
  * add cmake template instantiation for win
  * fix backend to place device id bug
  * fix ifdef error
  * Op2functor (#7)
  * add kernel args maker class
  * make args maker non-const
  * remove debug log
  * modify codes by review options
  * split constructPrKernelContext function
  * fix output name bug
  * fix test_mean_op test_sign_op failed
  * fill_any_like kernel refactor (#10)
  * fill_any_like kernel refactor
  * remove useless code of full_like c++ api
  * skip dtype for fill_any_like
  * add attrs for kernel key construct
  * add use_pt_kernel Flags to control whether to use pt kernel (#13)
  * add use_pt_kernel Flags to control whether to use pt kernel
  * change the default value to true for checking pt kernels
  * fix mutable_data cuda place error
  * move high level apis into hapi
  * remove selectedrows adapting temporarily
  * Support Scalar in Tensor Compute Library (#14)
  * fill_any_like kernel refactor
  * remove useless code of full_like c++ api
  * Support Scalar in Tensor Compute Library
  * add scalar in dygraph and static graph mode
  * keep the basic type for attr, instead of using scalar for all
  * merge the code
  * remove mkldnn tensor & polish details
  * use flat_hash_map and small_vector in kernel factory
  * Refactor flatten kernel (#12)
  * refactor flatten kernel
  * update infershape function
  * fix compile bugs
  * fix bugs when merge
  * fix compiler bugs
  * fix bugs when run test_flatten_api
  * fix bugs when run test
  * Revert "use flat_hash_map and small_vector in kernel factory" This reverts commit 23091495cfdd3df8cc1be592d30f09ea66a7c72b.
  * Move cpu, cuda and other device code into kernels (#15)
  * fill_any_like kernel refactor
  * remove useless code of full_like c++ api
  * Support Scalar in Tensor Compute Library
  * add scalar in dygraph and static graph mode
  * keep the basic type for attr, instead of using scalar for all
  * merge the code
  * start refactor matmul
  * move cpu, cuda and other device modules into kernels
  * merge code
  * polish code in operator.cc
  * Perfect unittests (#16)
  * perfect unittest
  * update license
  * replace with flat_hash_map, small_vector (#19)
  * fix small_vector build error on windows platform
  * replace with flat_hash_map, small_vector
  * remove todo
  * Perfect unittests (#20)
  * perfect unittest
  * update license
  * fix bug when run tcmpt_utils_test
  * refactor execution adapting impl
  * fix insert conflict
  * Fix CI bug of test_yolov3 (#21)
  * fill_any_like kernel refactor
  * remove useless code of full_like c++ api
  * Support Scalar in Tensor Compute Library
  * add scalar in dygraph and static graph mode
  * keep the basic type for attr, instead of using scalar for all
  * merge the code
  * start refactor matmul
  * move cpu, cuda and other device modules into kernels
  * merge code
  * polish code in operator.cc
  * Fix CI bug of test_yolov3
  * add the tensor base class, test=develop (#17)
  * update the tensor base class, test=develop
  * remove two funcs, test=develop
  * update the error msg, test=develop
    Co-authored-by: Chen Weihang <chenweihang@baidu.com>
  * [no-verify] commit backend and tensor signature changes
  * Rename tcmpt to pten (#23)
  * rename tcmpt to pten
  * update omitted files for rename to pten
  * update omitted file for rename to pten
  * remove k of all enum var
  * remove kernel_instantiate (#26)
  * remove symbols and spatial_tensor
  * change common to functions
  * readd share tensor impl methods
  * add a candidate dense tensor class, test=develop (#28)
  * change all Pt to Pten
  * resolve conflict with xiaowei
  * Op2functor opt1 (#27)
  * replace to small vector and change to const &
  * add std::move
    Co-authored-by: Chen Weihang <chenweihang@baidu.com>
  * polish kernel factory and kernel registry
  * fix operator test error msg mismatch
  * remove tensor signature and backend set member
  * move scalar and polish enforce
  * revert dtype layout change to fix error
  * fix enum operator override error
  * add several base unittests
  * add pten utils tests
  * polish some details
  * Dev/op2func refactor 3 (#30)
  * add a candidate dense tensor class, test=develop
  * remove TensorBase::backend(), test=develop
  * remove some ops, test=develop
  * cherry-pick the pr of tensor meta, test=develop
  * moves the dense tensor and some ops, test=develop
  * update the linalg operator, test=develop
  * update other operators, test=develop
  * fix errors, test=develop
  * fix bugs, test=develop
  * try to resolve the problem of windows ci, test=develop
  * updates codes, test=develop
  * fix the tensor_utils.cc, test=develop
  * modify the dense tensor, test=develop
  * fix the data type, test=develop
    Co-authored-by: shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
  * polish some details
  * polish kernel signature details
  * fix a bug about offsets of the tensor, test=develop (#31)
    Co-authored-by: shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
  * polish some details
    Co-authored-by: chentianyu03 <ctychentianyu@gmail.com>
    Co-authored-by: zyfncg <1370305206@qq.com>
    Co-authored-by: YuanRisheng <yuanrisheng@baidu.com>
    Co-authored-by: 石晓伟 <39303645+Shixiaowei02@users.noreply.github.com>
- 24 Oct 2021, 1 commit
  Committed by Zhen Wang
- 11 Oct 2021, 1 commit
  Committed by Zeng Jinle
  * add FLAGS_allreduce_record_one_event
  * add more comments
  * fix ut
  * improve coverage
  * fix ut, improve coverage
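A sketch of flipping the new switch for a data-parallel run; beyond the flag name, the behavior description comes only from the commit title above.

```python
# Minimal sketch: enable FLAGS_allreduce_record_one_event for this process.
# It only changes how events are recorded around allreduce in the executor.
import os
os.environ["FLAGS_allreduce_record_one_event"] = "true"

import paddle
# Launch multi-GPU training as usual (e.g. via paddle.distributed.launch);
# every worker runs this script, so the flag applies to each process.
```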