Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)

Commit b2ee8380
Unverified commit b2ee8380
Authored April 22, 2021 by Feiyu Chan; committed via GitHub, April 22, 2021

add glu in nn.functional (#32096)

Parent: e727820d
Showing 4 changed files with 76 additions and 0 deletions (+76, -0):

python/paddle/fluid/nets.py (+2, -0)
python/paddle/fluid/tests/unittests/test_glu.py (+23, -0)
python/paddle/nn/functional/__init__.py (+1, -0)
python/paddle/nn/functional/activation.py (+50, -0)
python/paddle/fluid/nets.py

@@ -16,6 +16,7 @@ from __future__ import print_function
 import six
 from . import layers
 from .data_feeder import check_variable_and_dtype, convert_dtype
+from ..utils import deprecated

 __all__ = [
     "simple_img_conv_pool",
...
@@ -332,6 +333,7 @@ def sequence_conv_pool(input,
     return pool_out


+@deprecated(since="2.0.0", update_to="paddle.nn.functional.glu")
 def glu(input, dim=-1):
     r"""
     :api_attr: Static Graph
...
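The `@deprecated(since=..., update_to=...)` decorator applied to the legacy `fluid.nets.glu` comes from `paddle.utils`, whose implementation is not part of this diff. As a rough illustration only, a hypothetical minimal version of such a decorator (not Paddle's actual code) could look like this:

```python
import functools
import warnings


def deprecated(since, update_to):
    # Hypothetical minimal deprecation decorator: emit a DeprecationWarning
    # that points callers at the replacement API, then delegate to the
    # original function unchanged.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated since %s; use %s instead."
                % (func.__name__, since, update_to),
                DeprecationWarning,
                stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecated(since="2.0.0", update_to="paddle.nn.functional.glu")
def legacy_glu():
    return "called"
```

Calling `legacy_glu()` still returns its result, but each call also raises a `DeprecationWarning` naming the new API.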
python/paddle/fluid/tests/unittests/test_glu.py

@@ -17,6 +17,9 @@ from paddle import fluid
 import paddle.fluid.dygraph as dg
 import unittest
+import paddle
+from paddle.nn import functional as F


 def sigmoid(x):
     return 1.0 / (1.0 + np.exp(-x))
...
@@ -48,5 +51,25 @@ class TestGLUCase(unittest.TestCase):
         self.check_identity(fluid.CUDAPlace(0))


+class TestGLUV2(unittest.TestCase):
+    def setUp(self):
+        self.x = np.random.randn(5, 20)
+        self.dim = -1
+        self.out = glu(self.x, self.dim)
+
+    def check_identity(self, place):
+        with dg.guard(place):
+            x_var = paddle.to_tensor(self.x)
+            y_var = F.glu(x_var, self.dim)
+            y_np = y_var.numpy()
+        np.testing.assert_allclose(y_np, self.out)
+
+    def test_case(self):
+        self.check_identity(fluid.CPUPlace())
+        if fluid.is_compiled_with_cuda():
+            self.check_identity(fluid.CUDAPlace(0))
+

 if __name__ == '__main__':
     unittest.main()
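The `glu(self.x, self.dim)` call in `setUp` refers to a NumPy reference helper defined earlier in the test file, outside the hunks shown here. A plausible equivalent, offered only as an assumption about what that helper computes (split the input in half along the axis, then gate with a sigmoid), is:

```python
import numpy as np


def glu_ref(x, axis=-1):
    # Assumed shape of the test file's NumPy reference: split x evenly into
    # content `a` and gate pre-activation `b` along `axis`, then return
    # a * sigmoid(b).
    a, b = np.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + np.exp(-b)))


x = np.random.randn(5, 20)
out = glu_ref(x, axis=-1)
print(out.shape)  # the split axis is halved: (5, 10)
```

This is exactly the quantity `TestGLUV2.check_identity` compares against `F.glu` via `np.testing.assert_allclose`.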
python/paddle/nn/functional/__init__.py

@@ -58,6 +58,7 @@ from .activation import tanh_  #DEFINE_ALIAS
 from .activation import tanhshrink  #DEFINE_ALIAS
 from .activation import thresholded_relu  #DEFINE_ALIAS
 from .activation import log_softmax  #DEFINE_ALIAS
+from .activation import glu  #DEFINE_ALIAS
 from .common import dropout  #DEFINE_ALIAS
 from .common import dropout2d  #DEFINE_ALIAS
 from .common import dropout3d  #DEFINE_ALIAS
...
python/paddle/nn/functional/activation.py

@@ -23,6 +23,8 @@ from ...tensor.math import tanh  #DEFINE_ALIAS
 from ...tensor.math import tanh_  #DEFINE_ALIAS
 from ...tensor.manipulation import _print_warning_in_static_mode
+from ...tensor.manipulation import chunk
+from ...tensor.math import multiply

 __all__ = [
     'brelu',
...
@@ -53,6 +55,7 @@ __all__ = [
     'tanhshrink',
     'thresholded_relu',
     'log_softmax',
+    'glu',
 ]

 import warnings
...
@@ -1276,3 +1279,50 @@ def log_softmax(x, axis=-1, dtype=None, name=None):
             attrs={'axis': axis})
     return out
+
+
+def glu(x, axis=-1, name=None):
+    r"""
+    The gated linear unit. The input is evenly split into 2 parts along a
+    given axis. The first part is used as the content, and the second part is
+    passed through a sigmoid function and then used as the gate. The output
+    is the elementwise multiplication of the content and the gate.
+
+    .. math::
+
+        \mathrm{GLU}(a, b) = a \otimes \sigma(b)
+
+    Parameters:
+        x (Tensor): The input Tensor with data type float32, float64.
+        axis (int, optional): The axis along which to split the input tensor.
+            It should be in range [-D, D), where D is the number of dimensions
+            of ``x``. If ``axis`` < 0, it works the same way as
+            :math:`axis + D`. Default is -1.
+        name (str, optional): Name for the operation (optional, default is
+            None). For more information, please refer to
+            :ref:`api_guide_Name`.
+
+    Returns:
+        A Tensor with the same data type as x. The size of the given axis is
+        halved.
+
+    Examples:
+        .. code-block:: python
+
+            import paddle
+            from paddle.nn import functional as F
+
+            x = paddle.to_tensor(
+                [[-0.22014759, -1.76358426, 0.80566144, 0.04241343],
+                 [-1.94900405, -1.89956081, 0.17134808, -1.11280477]]
+            )
+            print(F.glu(x).numpy())
+            # array([[-0.15216254, -0.9004892 ],
+            #        [-1.0577879 , -0.46985325]], dtype=float32)
+    """
+    check_variable_and_dtype(x, 'input', ['float16', 'float32', 'float64'],
+                             "glu")
+    a, b = chunk(x, 2, axis=axis, name=name)
+    gate = sigmoid(b, name=name)
+    out = paddle.multiply(a, gate, name=name)
+    return out
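The expected values in the docstring example can be cross-checked by hand: with `axis=-1` and 4 columns, the first two entries of each row are the content `a`, the last two are the gate input `b`, and each output entry is `a_i * sigmoid(b_i)`. A small standalone sanity check for the first row, using only the standard library:

```python
import math


def sigmoid(v):
    # Scalar logistic sigmoid: 1 / (1 + exp(-v)).
    return 1.0 / (1.0 + math.exp(-v))


# First row of the docstring example input, split into content and gate.
row = [-0.22014759, -1.76358426, 0.80566144, 0.04241343]
a, b = row[:2], row[2:]

out = [ai * sigmoid(bi) for ai, bi in zip(a, b)]
print(out)  # close to the docstring's first output row: [-0.15216254, -0.9004892]
```

The hand computation agrees with the documented output to within float32 precision, which is a useful sanity check that the `chunk` / `sigmoid` / `multiply` composition implements the formula above.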