Unverified · Commit e3217e3e authored by Vvsmile, committed by GitHub

[AMP OP&Test] Modify the FP16 and BF16 OpTest of Add_N (#52311)

* adjust the default tolerance of output and grad (see the sketch below)

* fix a bug in the grad check of OpTest

* fix how the default values are set in OpTest, for both forward and backward

* add the default values

* fix test_sum_op

* fix the test_sum_op test so it tests add_n

* modify the add_n op_test
Parent e16eb22c
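
The "default tolerance" bullets above describe OpTest falling back to dtype-aware tolerances when a test does not pass atol or max_relative_error explicitly. Below is a minimal, hypothetical Python sketch of that fallback; the class name, dictionary, and numeric values are placeholders, not Paddle's actual OpTest internals.

    import numpy as np

    # Placeholder (atol, max_relative_error) defaults keyed by dtype. Paddle's
    # real OpTest chooses its own values; these numbers are illustrative only.
    _DEFAULT_TOLERANCES = {
        np.float16: (1e-3, 1e-3),
        np.uint16: (1e-2, 1e-2),  # Paddle op tests carry bfloat16 as uint16
    }

    class MiniOpTest:
        dtype = np.float16

        def check_output_with_place(self, place, atol=None):
            # Forward check: fall back to the dtype default instead of a
            # hand-tuned per-test value such as atol=2e-2.
            if atol is None:
                atol, _ = _DEFAULT_TOLERANCES.get(self.dtype, (1e-5, 5e-3))
            print(f"check output on {place}: atol={atol}")

        def check_grad(self, inputs, output, max_relative_error=None):
            # Backward check: the same fallback for the gradient tolerance.
            if max_relative_error is None:
                _, max_relative_error = _DEFAULT_TOLERANCES.get(
                    self.dtype, (1e-5, 5e-3)
                )
            print(f"check grad of {inputs} w.r.t. {output}: "
                  f"max_relative_error={max_relative_error}")

    MiniOpTest().check_output_with_place('CUDAPlace(0)')  # atol=0.001
    MiniOpTest().check_grad(['x0'], 'Out')  # max_relative_error=0.001

With that fallback in place, the test below can drop its per-test overrides, which is exactly what the diff does.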
...
@@ -297,14 +297,14 @@ class TestFP16SumOp(TestSumOp):
     def test_check_output(self):
         place = core.CUDAPlace(0)
         if core.is_float16_supported(place):
-            self.check_output_with_place(place, atol=2e-2)
+            self.check_output_with_place(place)

     # FIXME: Because of the precision fp16, max_relative_error
     # should be 0.15 here.
     def test_check_grad(self):
         place = core.CUDAPlace(0)
         if core.is_float16_supported(place):
-            self.check_grad(['x0'], 'Out', max_relative_error=0.15)
+            self.check_grad(['x0'], 'Out')


 def create_test_sum_fp16_class(parent):
...
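
The diff is collapsed at create_test_sum_fp16_class. As a rough illustration only, Paddle op tests commonly implement such helpers with the class-factory pattern sketched below; the body here is an assumption, not the actual code from test_sum_op.py.

    import unittest

    import numpy as np

    class TestSumOp(unittest.TestCase):
        # Stand-in for the real base test class from test_sum_op.py.
        def init_kernel_type(self):
            self.dtype = np.float32

    def create_test_sum_fp16_class(parent):
        # Derive an FP16 variant of `parent` and register it at module
        # scope so that unittest discovery picks it up.
        class TestSumFp16Case(parent):
            def init_kernel_type(self):
                self.dtype = np.float16

        cls_name = "{}_{}".format(parent.__name__, "Fp16")
        TestSumFp16Case.__name__ = cls_name
        globals()[cls_name] = TestSumFp16Case

    create_test_sum_fp16_class(TestSumOp)  # registers TestSumOp_Fp16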