PaddlePaddle / PaddleHub

Commit 7240945e (unverified)
Authored by Zeyu Chen on May 12, 2020; committed by GitHub on May 12, 2020
Parents: abc05deb, 6811e824

Fix bug of get_embedding interface

Fix bugs
Showing 24 changed files with 38 additions and 39 deletions
hub_module/modules/text/semantic_model/bert_cased_L_12_H_768_A_12/README.md (+1 -1)
hub_module/modules/text/semantic_model/bert_cased_L_24_H_1024_A_16/README.md (+1 -1)
hub_module/modules/text/semantic_model/bert_chinese_L_12_H_768_A_12/README.md (+1 -1)
hub_module/modules/text/semantic_model/bert_multi_cased_L_12_H_768_A_12/README.md (+1 -1)
hub_module/modules/text/semantic_model/bert_multi_uncased_L_12_H_768_A_12/README.md (+1 -1)
hub_module/modules/text/semantic_model/bert_uncased_L_12_H_768_A_12/README.md (+1 -1)
hub_module/modules/text/semantic_model/bert_uncased_L_24_H_1024_A_16/README.md (+1 -1)
hub_module/modules/text/semantic_model/chinese_bert_wwm/README.md (+2 -2)
hub_module/modules/text/semantic_model/chinese_bert_wwm_ext/README.md (+1 -1)
hub_module/modules/text/semantic_model/chinese_electra_base/README.md (+1 -1)
hub_module/modules/text/semantic_model/chinese_electra_small/README.md (+1 -1)
hub_module/modules/text/semantic_model/chinese_roberta_wwm_ext/README.md (+1 -1)
hub_module/modules/text/semantic_model/chinese_roberta_wwm_ext_large/README.md (+1 -1)
hub_module/modules/text/semantic_model/ernie/README.md (+1 -1)
hub_module/modules/text/semantic_model/ernie_tiny/README.md (+1 -1)
hub_module/modules/text/semantic_model/ernie_v2_eng_base/README.md (+1 -1)
hub_module/modules/text/semantic_model/ernie_v2_eng_large/README.md (+1 -1)
hub_module/modules/text/semantic_model/rbt3/README.md (+1 -1)
hub_module/modules/text/semantic_model/rbtl3/README.md (+1 -1)
paddlehub/finetune/strategy.py (+4 -7)
paddlehub/finetune/task/base_task.py (+7 -7)
paddlehub/finetune/task/reading_comprehension_task.py (+1 -1)
paddlehub/module/nlp_module.py (+5 -3)
paddlehub/reader/nlp_reader.py (+1 -1)

hub_module/modules/text/semantic_model/bert_cased_L_12_H_768_A_12/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
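
The same one-line change is applied to every module README in this commit: the single params_layer argument of ULMFiTStrategy is split into frz_params_layer (used for gradual unfreezing) and dis_params_layer (used for discriminative learning rates). A minimal usage sketch of the updated call, with the ernie module used only as a stand-in for any of the Transformer modules listed here:

```python
import paddlehub as hub

# Load any Transformer module; "ernie" is used purely as an example.
module = hub.Module(name="ernie")

# get_params_layer() returns the module's parameter-to-layer mapping,
# exactly as in the README snippet above.
params_layer = module.get_params_layer()

# Updated interface: pass the layer map separately for gradual unfreezing
# (frz_params_layer) and for discriminative learning rates (dis_params_layer).
# Reusing the same map for both reproduces the old single-argument behaviour.
strategy = hub.finetune.strategy.ULMFiTStrategy(
    frz_params_layer=params_layer,
    dis_params_layer=params_layer)
```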

hub_module/modules/text/semantic_model/bert_cased_L_24_H_1024_A_16/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/bert_chinese_L_12_H_768_A_12/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/bert_multi_cased_L_12_H_768_A_12/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/bert_multi_uncased_L_12_H_768_A_12/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/bert_uncased_L_12_H_768_A_12/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/bert_uncased_L_24_H_1024_A_16/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/chinese_bert_wwm/README.md
@@ -114,7 +114,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码
@@ -218,7 +218,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/chinese_bert_wwm_ext/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/chinese_electra_base/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```

hub_module/modules/text/semantic_model/chinese_electra_small/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```

hub_module/modules/text/semantic_model/chinese_roberta_wwm_ext/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/chinese_roberta_wwm_ext_large/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/ernie/README.md
@@ -107,7 +107,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 利用该PaddleHub Module Fine-tune示例,可参考[文本分类](https://github.com/PaddlePaddle/PaddleHub/tree/release/v1.2/demo/text-classification)、[序列标注](https://github.com/PaddlePaddle/PaddleHub/tree/release/v1.2/demo/sequence-labeling)。

hub_module/modules/text/semantic_model/ernie_tiny/README.md
@@ -94,7 +94,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 利用该PaddleHub Module Fine-tune示例,可参考[文本分类](https://github.com/PaddlePaddle/PaddleHub/tree/release/v1.4.0/demo/text-classification)。

hub_module/modules/text/semantic_model/ernie_v2_eng_base/README.md
@@ -100,7 +100,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 利用该PaddleHub Module Fine-tune示例,可参考[文本分类](https://github.com/PaddlePaddle/PaddleHub/tree/release/v1.4.0/demo/text-classification)。

hub_module/modules/text/semantic_model/ernie_v2_eng_large/README.md
@@ -103,7 +103,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 利用该PaddleHub Module Fine-tune示例,可参考[文本分类](https://github.com/PaddlePaddle/PaddleHub/tree/release/v1.2/demo/text-classification)。

hub_module/modules/text/semantic_model/rbt3/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

hub_module/modules/text/semantic_model/rbtl3/README.md
@@ -96,7 +96,7 @@ embedding_result = module.get_embedding(texts=[["Sample1_text_a"],["Sample2_text
 # Use "get_params_layer" to get params layer and used to ULMFiTStrategy.
 params_layer = module.get_params_layer()
-strategy = hub.finetune.strategy.ULMFiTStrategy(params_layer=params_layer)
+strategy = hub.finetune.strategy.ULMFiTStrategy(frz_params_layer=params_layer, dis_params_layer=params_layer)
 ```
 ## 查看代码

paddlehub/finetune/strategy.py
@@ -511,10 +511,6 @@ class CombinedStrategy(DefaultStrategy):
                     unfreeze_depths=self.sorted_depth[:self.max_depth * self.epoch // self.scheduler["gradual_unfreeze"]["blocks"]])
                 else:
                     logger.warning(
                         "The max op-depth in the network is %s. That results in that can't use the gradual unfreeze finetune strategy."
                         % (self.max_depth))
             elif self.scheduler["gradual_unfreeze"]["params_layer"]:
                 max_layer = max(self.scheduler["gradual_unfreeze"]["params_layer"].values())
@@ -631,8 +627,9 @@ class ULMFiTStrategy(CombinedStrategy):
                  ratio=32,
                  dis_blocks=3,
                  factor=2.6,
                  dis_params_layer=None,
                  frz_blocks=3,
-                 params_layer=None):
+                 frz_params_layer=None):
         scheduler = {
             "slanted_triangle": {
@@ -641,12 +638,12 @@ class ULMFiTStrategy(CombinedStrategy):
             },
             "gradual_unfreeze": {
                 "blocks": frz_blocks,
-                "params_layer": params_layer
+                "params_layer": frz_params_layer
             },
             "discriminative": {
                 "blocks": dis_blocks,
                 "factor": factor,
-                "params_layer": params_layer
+                "params_layer": dis_params_layer
             }
         }
         regularization = {}
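
With the constructor change above, the freezing schedule and the discriminative learning-rate schedule are fed from separate scheduler entries, so they no longer have to share one layer map. A hedged sketch under that assumption; the coarser dis_layer grouping below is purely hypothetical and only illustrates that the two arguments can now differ:

```python
import paddlehub as hub

module = hub.Module(name="ernie")         # example module only
params_layer = module.get_params_layer()  # assumed to map parameter names to layer indices

# Hypothetical coarser grouping for discriminative learning rates: collapse the
# per-parameter layer indices into wider bands. Illustrative only.
dis_layer = {name: layer // 4 for name, layer in params_layer.items()}

strategy = hub.finetune.strategy.ULMFiTStrategy(
    frz_params_layer=params_layer,  # gradual unfreezing follows the full layer map
    dis_params_layer=dis_layer)     # learning-rate decay follows the coarser map
```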

paddlehub/finetune/task/base_task.py
@@ -36,7 +36,7 @@ from visualdl import LogWriter
 import paddlehub as hub
 from paddlehub.common.paddle_helper import dtype_map, clone_program
-from paddlehub.common.utils import mkdir, version_compare
+from paddlehub.common.utils import mkdir
 from paddlehub.common.dir import tmp_dir
 from paddlehub.common.logger import logger
 from paddlehub.finetune.checkpoint import load_checkpoint, save_checkpoint
@@ -951,12 +951,6 @@ class BaseTask(object):
         Returns:
             RunState: the running result of predict phase
         """
-        if isinstance(self._base_data_reader, hub.reader.LACClassifyReader):
-            raise Exception(
-                "LACClassifyReader does not support predictor, please close accelerate_mode")
         global_run_states = []
         period_run_states = []
@@ -998,6 +992,12 @@ class BaseTask(object):
         Returns:
             RunState: the running result of predict phase
         """
+        if accelerate_mode and isinstance(self._base_data_reader, hub.reader.LACClassifyReader):
+            logger.warning(
+                "LACClassifyReader does not support predictor, the accelerate_mode is closed now.")
+            accelerate_mode = False
         self.accelerate_mode = accelerate_mode
         with self.phase_guard(phase="predict"):
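
The net effect of the two predict-related hunks above: a task backed by hub.reader.LACClassifyReader no longer raises when prediction is requested with acceleration enabled; the flag is switched off with a warning and the normal predict path runs. A hedged sketch of the caller-visible behaviour, where cls_task and predict_data are placeholders for an already-built LACClassifyReader-based task and its input data:

```python
# Before this commit the call below raised
#   Exception("LACClassifyReader does not support predictor, please close accelerate_mode");
# after it, PaddleHub only logs
#   "LACClassifyReader does not support predictor, the accelerate_mode is closed now."
# and continues with accelerate_mode internally set to False.
results = cls_task.predict(data=predict_data, accelerate_mode=True)
```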

paddlehub/finetune/task/reading_comprehension_task.py
@@ -205,7 +205,7 @@ def get_predictions(all_examples, all_features, all_results, n_best_size,
     for (feature_index, feature) in enumerate(features):
         if feature.unique_id not in unique_id_to_result:
             logger.info(
-                "As using pyreader, the last one batch is so small that the feature %s in the last batch is discarded "
+                "As using multidevice, the last one batch is so small that the feature %s in the last batch is discarded "
                 % feature.unique_id)
             continue
         result = unique_id_to_result[feature.unique_id]

paddlehub/module/nlp_module.py
@@ -397,7 +397,8 @@ class TransformerModule(NLPBaseModule):
         return inputs, outputs, module_program

-    def get_embedding(self, texts, use_gpu=False, batch_size=1):
+    def get_embedding(self, texts, max_seq_len=512, use_gpu=False, batch_size=1):
         """
         get pooled_output and sequence_output for input texts.
         Warnings: this method depends on Paddle Inference Library, it may not work properly in PaddlePaddle <= 1.6.2.
@@ -405,6 +406,7 @@ class TransformerModule(NLPBaseModule):
         Args:
             texts (list): each element is a text sample, each sample include text_a and text_b where text_b can be omitted.
                           for example: [[sample0_text_a, sample0_text_b], [sample1_text_a, sample1_text_b], ...]
+            max_seq_len (int): the max sequence length.
             use_gpu (bool): use gpu or not, default False.
             batch_size (int): the data batch size, default 1.
@@ -417,12 +419,12 @@ class TransformerModule(NLPBaseModule):
             ) or self.emb_job["batch_size"] != batch_size or self.emb_job["use_gpu"] != use_gpu:
                 inputs, outputs, program = self.context(
-                    trainable=True, max_seq_len=self.MAX_SEQ_LEN)
+                    trainable=True, max_seq_len=max_seq_len)
                 reader = hub.reader.ClassifyReader(
                     dataset=None,
                     vocab_path=self.get_vocab_path(),
-                    max_seq_len=self.MAX_SEQ_LEN,
+                    max_seq_len=max_seq_len,
                     sp_model_path=self.get_spm_path() if hasattr(self, "get_spm_path") else None,
                     word_dict_path=self.get_word_dict_path() if hasattr(
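
After this change the sequence length can be capped per call instead of always padding to the module-level MAX_SEQ_LEN. A minimal usage sketch of the updated interface, with texts in the format documented in the docstring above and ernie used only as an example module:

```python
import paddlehub as hub

module = hub.Module(name="ernie")  # any TransformerModule works; ernie is an example

# Each sample is [text_a] or [text_a, text_b], as described in the docstring.
texts = [["Sample1_text_a"], ["Sample2_text_a", "Sample2_text_b"]]

# max_seq_len is now an explicit argument (default 512) instead of being fixed
# to self.MAX_SEQ_LEN; smaller values reduce padding and inference cost.
embedding_result = module.get_embedding(
    texts=texts, max_seq_len=128, use_gpu=False, batch_size=1)
```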

paddlehub/reader/nlp_reader.py
@@ -1113,7 +1113,7 @@ class LACClassifyReader(BaseReader):
                 return processed

-        if not self.has_processed[phase]:
+        if not self.has_processed[phase] or phase == "predict":
             logger.info(
                 "processing %s data now... this may take a few minutes" % phase)
             for i in range(len(data)):