Commit fc429fea (unverified) in BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Authored on Oct 15, 2021 by Yanxing Shi; committed via GitHub on Oct 15, 2021.

[cherry-pick] add sparse_embedding doc (#36312)

* add sparse_embedding doc
* modify sample code
* fix sample code error

Parent: 976f0146
Showing 1 changed file with 110 additions and 1 deletion.

python/paddle/fluid/contrib/layers/nn.py (+110, -1)
...
...
@@ -971,12 +971,121 @@ def sparse_embedding(input,
                     table_class="CommonSparseTable",
                     param_attr=None,
                     dtype='float32'):
    r"""
:api_attr: Static Graph
The OP is used as the operator of the Embedding Lookup layer in large-scale sparse training
under the parameter server mode, instead of using paddle.nn.functional.embedding.
The operator looks up the embedding vectors of the ids provided by :attr:`input` .
It automatically constructs a 2D embedding matrix based on the input :attr:`size`
(vocab_size, emb_size) and :attr:`dtype` .
The shape of the output Tensor is generated by appending an emb_size dimension to the
last dimension of the input Tensor's shape.
**Note:** The id in :attr:`input` must satisfy :math:`0 <= id < size[0]` , otherwise
the program will throw an exception and exit.
.. code-block:: text
Case 1:
input is a Tensor. padding_idx = -1
input.data = [[1, 3], [2, 4], [4, 127]]
input.shape = [3, 2]
Given size = [128, 16]
output is a Tensor:
out.shape = [3, 2, 16]
out.data = [[[0.129435295, 0.244512452, ..., 0.436322452],
[0.345421456, 0.524563927, ..., 0.144534654]],
[[0.345249859, 0.124939536, ..., 0.194353745],
[0.945345345, 0.435394634, ..., 0.435345365]],
[[0.945345345, 0.435394634, ..., 0.435345365],
[0.0, 0.0, ..., 0.0 ]]] # padding data
Since the input padding_idx is less than 0, it is automatically converted to padding_idx = -1 + 128 = 127.
All-zero data is padded whenever an id is 127.
Case 2:
input is a LoDTensor with 1-level LoD. padding_idx = 0
input.lod = [[2, 3]]
input.data = [[1], [3], [2], [4], [0]]
input.shape = [5, 1]
Given size = [128, 16]
output is a LoDTensor:
out.lod = [[2, 3]]
out.shape = [5, 1, 16]
out.data = [[[0.129435295, 0.244512452, ..., 0.436322452]],
[[0.345421456, 0.524563927, ..., 0.144534654]],
[[0.345249859, 0.124939536, ..., 0.194353745]],
[[0.945345345, 0.435394634, ..., 0.435345365]],
[[0.0, 0.0, ..., 0.0 ]]] # padding data
All-zero data is padded whenever an id is 0.
Args:
input(Variable): A Tensor or LoDTensor with type int64, which contains the id
information. The value of the input id should satisfy :math:`0<= id < size[0]` .
size(tuple|list): The shape of lookup table parameter (vocab_size, emb_size). It
should have two elements which indicate the size of the dictionary of embeddings
and the size of each embedding vector respectively. The initial parameter size
is 0 in the large-scale sparse scenario, which will gradually expand with the
training. So if vocab_size is temporarily useless, its value can be any integer.
The emb_size is the dimensional configuration of the word embedding weight parameter.
padding_idx(int|long|None, optional): padding_idx needs to be in the interval [-vocab_size, vocab_size).
If :math:`padding\_idx < 0`, the :math:`padding\_idx` will automatically be converted
to :math:`vocab\_size + padding\_idx` . It will output all-zero padding data whenever
lookup encounters :math:`padding\_idx` in id. And the padding data will not be updated
while training. If set to None, it has no effect on the output. Default: None.
is_test(bool, optional): Training or prediction mode. In prediction mode (is_test=True),
the output is created without initialization and is filled with 0 before being returned. Default: False.
entry(str, optional): Entry config for the parameter server; the value can be ProbabilityEntry,
CountFilterEntry, or None. Default: None.
table_class(str, optional): The type of the sparse table. The value can be CommonSparseTable
or SSDSparseTable. The default is CommonSparseTable.
param_attr(ParamAttr, optional): To specify the weight parameter property. Default: None, which means the
default weight parameter property is used. In addition, user-defined or pre-trained word
vectors can be loaded with the :attr:`param_attr` parameter. The local word vector needs
to be transformed into numpy format, and the shape of local word vector should be consistent
with :attr:`size` .
dtype(str|core.VarDesc.VarType): It refers to the data type of output Tensor. It must be float32 or
float64. Default: float32.
Returns:
Variable: Embedding Tensor or LoDTensor mapped by input. The data type is the same as :attr:`dtype` .
Examples:
.. code-block:: python
import paddle
paddle.enable_static()
sparse_feature_dim = 1024
embedding_size = 64
# Only features that appear 10 times or more will participate in the training.
entry = paddle.distributed.CountFilterEntry(10)
input = paddle.static.data(name='ins', shape=[1], dtype='int64')
emb = paddle.static.nn.sparse_embedding(
input=input,
size=[sparse_feature_dim, embedding_size],
is_test=False,
entry=entry,
param_attr=paddle.ParamAttr(name="SparseFeatFactors",
initializer=paddle.nn.initializer.Uniform()))
"""
    helper = LayerHelper('sparse_embedding', **locals())
    check_variable_and_dtype(input, 'input', ['int64'],
                             'fluid.contrib.layers.sparse_embedding')
-   check_dtype(dtype, 'dtype', ['float32'],
+   check_dtype(dtype, 'dtype', ['float32', 'float64'],
                'paddle.static.nn.sparse_embedding')
    w = helper.create_parameter(
...
...
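For reference, the lookup semantics documented above (the output shape is the input shape with an emb_size axis appended; a negative padding_idx is normalized to vocab_size + padding_idx; rows at the padding index come back as all zeros) can be mimicked with plain NumPy. This is a minimal illustrative sketch, not Paddle's implementation; lookup_sketch and the random table are made up for the example.

import numpy as np

def lookup_sketch(ids, table, padding_idx=None):
    # Illustrative only: mimics the documented behaviour, not Paddle's kernel.
    vocab_size, emb_size = table.shape
    if padding_idx is not None and padding_idx < 0:
        # A negative padding_idx is converted to vocab_size + padding_idx,
        # as in Case 1 of the docstring (-1 + 128 = 127).
        padding_idx = vocab_size + padding_idx
    out = table[ids]                          # shape: ids.shape + (emb_size,)
    if padding_idx is not None:
        out[ids == padding_idx] = 0.0         # all-zero padding rows
    return out

# Mirrors Case 1 of the docstring: size = [128, 16], padding_idx = -1.
table = np.random.rand(128, 16).astype('float32')
ids = np.array([[1, 3], [2, 4], [4, 127]])
out = lookup_sketch(ids, table, padding_idx=-1)
print(out.shape)   # (3, 2, 16)
print(out[2, 1])   # all zeros, because 127 == 128 + (-1)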