- 26 Jun 2022, 1 commit

  Committed by Sing_chan

- 05 Jun 2022, 1 commit

  Committed by Sing_chan

- 18 May 2022, 1 commit

  Committed by Sławomir Siwek

  * matmul refactor
  * remove UT which only checks ENFORCE output
  * code format
  * improve memory usage

- 09 May 2022, 1 commit

  Committed by Jacek Czaja

  * fix crash
  * more fixes
  * added diagnostics
  * matmul output fixes
  * compilation fix
  * stop rotating too-small shapes
  * added enabling of matmul_V2 oneDNN test

- 20 Feb 2022, 1 commit

  Committed by Chen Weihang

  * rename pten dir to phi
  * rename namespace to phi
  * rename infrt pten dir to phi
  * resolve conflict
  * rename pten to phi in cmake
  * revert all infrt changes
  * change needed files
  * fix infrt failures
  * fix inference failures

- 19 Feb 2022, 1 commit

  Committed by Aurelius84

  * Unify paddle/pten::framework::ddim into pten::ddim
  * fix paddle namespace
  * compile successfully
  * fix npu src file
  * fix conflict
  * fix conflict
  * fix tensorrt compiler error
  * fix conflict
  * fix conflict
  * fix test file conflict
  * fix conflict
  * fix mlu file conflict
  * fix mlu file conflict
  * fix cinn header file conflict
  * fix conflict
  * fix conflict
  * fix conflict
  * fix conflict

- 18 Feb 2022, 1 commit

  Committed by Feiyu Chan

  * move blas-related files
  * move lapack-related files

- 15 Feb 2022, 1 commit

  Committed by Aurelius84

  * #1 migrate dist-related type() -> dtype()
  * move datatype function from pten -> fluid/framework
  * change type() in imperative into convert(dtype())
  * modify xx_tensor->type into xx_tensor->dtype
  * change the set_type interface and its callers
  * modify xx_tensor.type into xx_tensor.dtype
  * fix mutable_data(place, dtype())
  * change callers of mutable_data in pten and distributed
  * change callers of mutable_data in fluid/framework
  * change callers of mutable_data in the imperative directory
  * mutable_data: inference
  * update the calls of mutable_data
  * transfer MakePenScalarArray MakePtenScalar ResetHolderWithType
  * pass the compile; the next step is removing VarType from Pten
  * fix all and remove VarType from pten; succeeds on Linux, next task is the other platforms
  * fix conflict with develop
  * fix compile error
  * fix reset conversion
  * fix conflict
  * fix compile problem
  * fix typo
  * fix << in tensor_utils.cc
  * fix type -> dtype
  * fix unittest
  * fix tensor init constructor
  * fix DataTypeSize for BFloat16
  * fix code style
  * fix npu compile error
  * fix npu
  * compile npu successfully
  * fix conflict
  * fix conflict

  Co-authored-by: xiongkun <xiongkun03@baidu.com>

- 11 Feb 2022, 1 commit

  Committed by Feiyu Chan

  * move operators/math/math_function_* to pten/kernels/func
  * change namespace from `paddle::operators::math` to `pten::funcs`

- 27 Dec 2021, 1 commit

  Committed by baoachun

- 29 Nov 2021, 1 commit

  Committed by piotrekobiIntel

- 27 Oct 2021, 1 commit

  Committed by baoachun

  * fix matmul dim error
  * fix wrong dim check in matmul

- 15 Sep 2021, 1 commit

  Committed by jakpiase

- 06 Sep 2021, 1 commit

  Committed by wawltor

  * Add the extra flag for some ops
  * fix the compile problem in matmul extra

- 18 Aug 2021, 1 commit

  Committed by wawltor

- 12 Jul 2021, 1 commit

  Committed by WeiXin

- 22 May 2021, 1 commit

  Committed by jakpiase

  * added support for most matmul cases
  * added more functionality
  * full functionality of matmul op, fp32 only
  * added bf16 tests and functionality
  * added formatting
  * changes after review
  * minor change
  * added reviewers' suggestions

- 25 Mar 2021, 1 commit

  Committed by Chen Weihang

  * polish two error messages
  * polish details

- 02 Mar 2021, 1 commit

  Committed by Qi Li

- 25 Jan 2021, 1 commit

  Committed by arlesniak

  * More precise mkldnn kernel choice in GetExpectedKernelType
  * Fixes after review
  * Refresh develop for CI
  * CI experiment
  * get back from CI experiment

- 30 Dec 2020, 1 commit

  Committed by wawltor

  * add support for the op version check for matmul, test=op_version

- 04 Dec 2020, 1 commit

  Committed by Chen Weihang

  * basic impl of type promotion
  * add comment & another test case
  * fix complex bugs & support python op promote type
  * fix failed unittests & polish code
  * add unittest for coverage
  * change to only promote complex type
  * polish code details
  * polish several comments

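  As a loose illustration of what "only promote complex type" means (a minimal sketch using NumPy as a stand-in; the commit itself changes Paddle's operator type-promotion rules, not NumPy):

  ```python
  import numpy as np

  # A real operand meeting a complex operand is promoted to complex;
  # promotion between two real dtypes is not affected by this change.
  x = np.ones(3, dtype=np.float32)
  y = np.ones(3, dtype=np.complex64)
  print((x * y).dtype)  # complex64
  ```
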
- 27 Nov 2020, 1 commit

  Committed by arlesniak

- 10 Oct 2020, 1 commit

  Committed by wangxinxin08

  * add matmul double-grad op
  * fix compile errors
  * modify code according to review
  * delete float16

- 08 Aug 2020, 1 commit

  Committed by joanna.wozna.intel

  * Change use_quantizer attribute name and data type
  * Fix problem with setting attribute
  * Add changes due to review
  * Small change in function
  * Restore use_quantizer attr for compatibility

- 28 Apr 2020, 1 commit

  Committed by Sylwester Fraczek

- 24 Apr 2020, 1 commit

  Committed by arlesniak

- 17 Apr 2020, 1 commit

  Committed by Zhong Hui

  * fix error messages of l2_normalize, matmul, mean, etc.
  * add test cases for those ops

- 11 Apr 2020, 1 commit

  Committed by Michał Gallus

  * Initial FP32 DNNL MatMul implementation
  * Implement int8 DNNL MatMul
  * Unify in-kernel naming, clean UTs
  * MatMul: introduce op caching
  * Final adjustments. test=develop
  * Remove dy_graph disablement. test=develop
  * Change dnnl header name to the new one. test=develop
  * Constrain multi-head check to prevent failures. test=develop
  * Resolve dnnl header problems on MAC CI
  * Variable naming for kernel and skip_grad_ci added. test=develop
  * Prevent MAC CI from failing
  * Prevent Windows build from failing. test=develop
  * Modify UTs to conform to the rules
  * Modify MatMul aux function naming. test=develop

- 04 Apr 2020, 1 commit

  Committed by Chen Weihang

  * delete invalid check interfaces Ref & VectorRef, test=develop
  * fix vector ref delete error, test=develop
  * try the new check interface, test=develop
  * change all related code with the new check macro, test=develop
  * remove static assert, test=develop
  * polish details, test=develop
  * skip coverage problem, test=develop
  * add new check macro, test=develop

- 11 Mar 2020, 1 commit

  Committed by wawltor

  In the matmul op, use a single GEMM call to replace batched GEMM, speeding up the op.

- 09 Mar 2020, 1 commit

  Committed by Zeng Jinle

  * refine grad maker, test=develop
  * refactor tracer stage 1, test=develop
  * merge develop to resolve conflicts (third time), test=develop

- 25 Dec 2019, 1 commit

  Committed by hong

- 31 Oct 2019, 1 commit

  Committed by hong

  * refactor dygraph, test=develop
  * fix failed unittest, test=develop
  * polish code, test=develop
  * check windows ci error, test=develop; try to fix windows ci error by np.allclose, test=develop
  * polish vlog and profiler, test=develop
  * try to fix preceding ops order, test=develop
  * test transformer in windows ci, test=develop
  * use python c-api to speed up tracer.trace, test=develop
  * test=develop, fix docker with paddle nccl problem
  * test=develop, add ut for debug string and gradient_accumulator
  * test=develop, add tests for layer/gradient_accumulator/prepared_op
  * test=develop, fix compile error for test_prepared_op
  * test=develop, add more ut for dygraph
  * test=develop, create API.spec for dygraph api change
  * optimize grad maker; test=develop
  * optimize grad maker
  * test
  * grad maker optim; test=develop
  * fix unittest bugs; test=develop
  * add dygraph grad op maker and split_op
  * grad op maker refactor; test=develop
  * add dygraph grad maker; test=develop
  * fix op deformable_conv_v1_op bug; test=develop
  * fix deformable_conv prroi pool bugs
  * fix new op grad op maker bug; test=develop
  * fix split by ref bug; test=develop
  * fix dygraph auto prune bug; test=develop
  * fix test_trace bug; test=develop
  * fix fused emb seq pool bug; test=develop
  * remove useless code in op_desc file; test=develop
  * remove useless code, StrVarBaseNode; test=develop
  * fix review issues; test=develop
  * fix rank_loss grad maker; test=develop
  * remove flag in VarBase; test=develop
  * fix distributed_notify_op compile bug; test=develop
  * fix reshape op double grad; test=develop
  * fix expand as op; test=develop
  * add imperative type_defs.h for demo_train; test=develop
  * fix inference lib cmake; test=develop
  * fix inference lib; test=develop
  * fix inference lib; test=develop
  * fix inference cmake; test=develop
  * fix inference lib; test=develop
  * fix inference lib; test=develop
  * remove condition dygraph grad maker, modify local name; test=develop
  * fix split grad maker bug; test=develop
  * fix pyramid_op bug; test=develop
  * change travis timeout limit; test=develop
  * restore travis; test=develop
  * change timeout limit; test=develop

- 23 Oct 2019, 1 commit

  Committed by 石晓伟

  * update the infer shape of matmul, test=release/1.6
  * add unittests of matmul, test=release/1.6
  * change func names, test=develop

- 15 Oct 2019, 1 commit

  Committed by 石晓伟

  * add data type check, test=develop
  * polish error messages, test=develop
  * polish error messages, test=develop
  * remove support for the CPU architecture matmul, test=develop
  * fix syntax bug, test=develop

- 25 Sep 2019, 1 commit

  Committed by Bob Zhu

  * add support for matmul with multiple heads even with different width and height.
    The original matmul with multiple heads supports only mat_a.width == mat_b.height;
    in that case, mat_b is split horizontally. This patch extends the support to
    mat_a.width != mat_b.height, as long as mat_a.width / head_number == mat_b.height,
    in which case mat_b is split vertically. For example, if A is [3, 8], B is [2, 16]
    and head_number is 4, A is split into 4 matrices of [3, 2] and B is (vertically)
    split into 4 matrices of [2, 4]. The final result is 4 matrices of [3, 4],
    i.e. [3, 16]. test=develop
  * refactor the code of matmul with multiple heads even with different width and
    height. test=develop

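  To make the shape arithmetic above concrete, here is a minimal NumPy sketch of the vertical-split case (NumPy is used only as a stand-in for the actual C++ kernel; this is an illustration, not the implementation):

  ```python
  import numpy as np

  # Extension case: mat_a.width != mat_b.height, but
  # mat_a.width / head_number == mat_b.height (8 / 4 == 2).
  a = np.random.rand(3, 8)    # Mat A: [3, 8]
  b = np.random.rand(2, 16)   # Mat B: [2, 16]
  head_number = 4

  a_heads = np.split(a, head_number, axis=1)  # 4 matrices of [3, 2]
  b_heads = np.split(b, head_number, axis=1)  # B split vertically: 4 of [2, 4]

  # Each [3, 2] @ [2, 4] product is [3, 4]; concatenated they give [3, 16].
  out = np.concatenate([ah @ bh for ah, bh in zip(a_heads, b_heads)], axis=1)
  print(out.shape)  # (3, 16)
  ```
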
- 24 Jul 2019, 1 commit

  Committed by Bob Zhu

  * extend matmul op to support multi-head multiplication. With multi-head support,
    the multiplication of two big matrices is split into multiplications of several
    (head_number) small matrices. For example, if Mat A is [3, 24] and Mat B is
    [24, 4], multiplying A and B with head_number 4 splits Mat A into 4 matrices
    of [3, 6] and Mat B into 4 matrices of [6, 4]. The final result is 4 matrices
    of [3, 4], i.e. [3, 16].

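  For comparison with the extension above, a minimal NumPy sketch of this original case, where mat_a.width == mat_b.height and B is split horizontally (again an illustration under that assumption, not the kernel itself):

  ```python
  import numpy as np

  # Original multi-head case: mat_a.width == mat_b.height (24 == 24).
  a = np.random.rand(3, 24)   # Mat A: [3, 24]
  b = np.random.rand(24, 4)   # Mat B: [24, 4]
  head_number = 4

  a_heads = np.split(a, head_number, axis=1)  # 4 matrices of [3, 6]
  b_heads = np.split(b, head_number, axis=0)  # B split horizontally: 4 of [6, 4]

  # Multiply head by head and concatenate the [3, 4] results side by side.
  out = np.concatenate([ah @ bh for ah, bh in zip(a_heads, b_heads)], axis=1)
  print(out.shape)  # (3, 16)
  ```
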
- 21 Mar 2019, 1 commit

  Committed by phlrain

- 18 Sep 2018, 1 commit

  Committed by sneaxiy