Unverified commit 3328a3d5, authored by sneaxiy, committed by GitHub

fix fused linear bfloat16 (#51384)

Parent 2c7cbb86
@@ -130,5 +130,6 @@ REGISTER_OP_CUDA_KERNEL(
     ops::FusedGemmEpilogueGradKernel<phi::GPUContext, double>,
     ops::FusedGemmEpilogueGradKernel<phi::GPUContext,
                                      paddle::platform::float16>,
-    ops::FusedGemmEpilogueKernel<phi::GPUContext, paddle::platform::bfloat16>);
+    ops::FusedGemmEpilogueGradKernel<phi::GPUContext,
+                                     paddle::platform::bfloat16>);
 #endif
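For context, the hunk above shows that the bfloat16 entry in the gradient-kernel registration previously instantiated the forward kernel (FusedGemmEpilogueKernel) instead of the gradient kernel (FusedGemmEpilogueGradKernel), so the bfloat16 backward pass dispatched to the wrong kernel. A sketch of how the registration reads after this change is given below; the op name (fused_gemm_epilogue_grad) and the float instantiation are assumed from surrounding context and are not visible in this hunk, only the double/float16/bfloat16 lines are confirmed by the diff.

    // Sketch of the grad-kernel registration after the fix (names outside the
    // visible hunk are assumed, not taken from this diff).
    REGISTER_OP_CUDA_KERNEL(
        fused_gemm_epilogue_grad,  // assumed op name
        ops::FusedGemmEpilogueGradKernel<phi::GPUContext, float>,  // assumed
        ops::FusedGemmEpilogueGradKernel<phi::GPUContext, double>,
        ops::FusedGemmEpilogueGradKernel<phi::GPUContext,
                                         paddle::platform::float16>,
        // Fixed line: register the *Grad* kernel for bfloat16, matching the
        // other dtypes, instead of the forward FusedGemmEpilogueKernel.
        ops::FusedGemmEpilogueGradKernel<phi::GPUContext,
                                         paddle::platform::bfloat16>);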