PaddlePaddle / PaddleHub

Commit 7b1fc7c8 — Fix bugs in parameter parsing

Authored June 10, 2019 by wuzewu
Parent: 84692191
6 changed files with 22 additions and 20 deletions (+22 −20)
demo/image-classification/img_classifier.py  +9 −8
demo/image-classification/predict.py  +7 −6
demo/sequence-labeling/predict.py  +1 −1
demo/sequence-labeling/sequence_label.py  +2 −2
demo/text-classification/predict.py  +1 −1
demo/text-classification/text_classifier.py  +2 −2
demo/image-classification/img_classifier.py

 #coding:utf-8
 import argparse
 import os
+import ast
 import paddle.fluid as fluid
 import paddlehub as hub
...
@@ -8,14 +9,14 @@ import numpy as np
 # yapf: disable
 parser = argparse.ArgumentParser(__doc__)
-parser.add_argument("--num_epoch", type=int, default=1, help="Number of epoches for fine-tuning.")
-parser.add_argument("--use_gpu", type=bool, default=False, help="Whether use GPU for fine-tuning.")
-parser.add_argument("--checkpoint_dir", type=str, default="paddlehub_finetune_ckpt", help="Path to save log data.")
-parser.add_argument("--batch_size", type=int, default=16, help="Total examples' number in batch for training.")
-parser.add_argument("--module", type=str, default="resnet50", help="Module used as feature extractor.")
-parser.add_argument("--dataset", type=str, default="flowers", help="Dataset to finetune.")
-parser.add_argument("--use_pyreader", type=bool, default=False, help="Whether use pyreader to feed data.")
-parser.add_argument("--use_data_parallel", type=bool, default=False, help="Whether use data parallel.")
+parser.add_argument("--num_epoch", type=int, default=1, help="Number of epoches for fine-tuning.")
+parser.add_argument("--use_gpu", type=ast.literal_eval, default=False, help="Whether use GPU for fine-tuning.")
+parser.add_argument("--checkpoint_dir", type=str, default="paddlehub_finetune_ckpt", help="Path to save log data.")
+parser.add_argument("--batch_size", type=int, default=16, help="Total examples' number in batch for training.")
+parser.add_argument("--module", type=str, default="resnet50", help="Module used as feature extractor.")
+parser.add_argument("--dataset", type=str, default="flowers", help="Dataset to finetune.")
+parser.add_argument("--use_pyreader", type=ast.literal_eval, default=False, help="Whether use pyreader to feed data.")
+parser.add_argument("--use_data_parallel", type=ast.literal_eval, default=False, help="Whether use data parallel.")
 # yapf: enable.
 module_map = {
...
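The bug these diffs fix is easy to reproduce: argparse passes the raw command-line string to the `type` callable, and `bool("False")` is `True` because any non-empty string is truthy, so `--use_gpu False` silently enabled the GPU path. A minimal sketch of the before/after behavior (flag names taken from the demos above):

```python
import argparse
import ast

# Buggy version: type=bool applies bool() to the string "False",
# and bool("False") is True, since the string is non-empty.
buggy = argparse.ArgumentParser()
buggy.add_argument("--use_gpu", type=bool, default=False)
print(buggy.parse_args(["--use_gpu", "False"]).use_gpu)   # True -- wrong!

# Fixed version: ast.literal_eval parses the string as the
# Python literal False, yielding an actual boolean False.
fixed = argparse.ArgumentParser()
fixed.add_argument("--use_gpu", type=ast.literal_eval, default=False)
print(fixed.parse_args(["--use_gpu", "False"]).use_gpu)   # False -- correct
```

The same pattern repeats in every file below: each `type=bool` flag becomes `type=ast.literal_eval`, with `import ast` added where it was missing.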
demo/image-classification/predict.py

 #coding:utf-8
 import argparse
 import os
+import ast
 import paddle.fluid as fluid
 import paddlehub as hub
...
@@ -8,12 +9,12 @@ import numpy as np
 # yapf: disable
 parser = argparse.ArgumentParser(__doc__)
-parser.add_argument("--use_gpu", type=bool, default=False, help="Whether use GPU for predict.")
-parser.add_argument("--checkpoint_dir", type=str, default="paddlehub_finetune_ckpt", help="Path to save log data.")
-parser.add_argument("--batch_size", type=int, default=16, help="Total examples' number in batch for training.")
-parser.add_argument("--module", type=str, default="resnet50", help="Module used as a feature extractor.")
-parser.add_argument("--dataset", type=str, default="flowers", help="Dataset to finetune.")
-parser.add_argument("--use_pyreader", type=bool, default=False, help="Whether use pyreader to feed data.")
+parser.add_argument("--use_gpu", type=ast.literal_eval, default=False, help="Whether use GPU for predict.")
+parser.add_argument("--checkpoint_dir", type=str, default="paddlehub_finetune_ckpt", help="Path to save log data.")
+parser.add_argument("--batch_size", type=int, default=16, help="Total examples' number in batch for training.")
+parser.add_argument("--module", type=str, default="resnet50", help="Module used as a feature extractor.")
+parser.add_argument("--dataset", type=str, default="flowers", help="Dataset to finetune.")
+parser.add_argument("--use_pyreader", type=ast.literal_eval, default=False, help="Whether use pyreader to feed data.")
 # yapf: enable.
 module_map = {
...
demo/sequence-labeling/predict.py

...
@@ -35,7 +35,7 @@ parser.add_argument("--checkpoint_dir", type=str, default=None, help="Directory
 parser.add_argument("--max_seq_len", type=int, default=512, help="Number of words of the longest seqence.")
 parser.add_argument("--batch_size", type=int, default=1, help="Total examples' number in batch for training.")
 parser.add_argument("--use_gpu", type=ast.literal_eval, default=False, help="Whether use GPU for finetuning, input should be True or False")
-parser.add_argument("--use_pyreader", type=bool, default=False, help="Whether use pyreader to feed data.")
+parser.add_argument("--use_pyreader", type=ast.literal_eval, default=False, help="Whether use pyreader to feed data.")
 args = parser.parse_args()
 # yapf: enable.
...
demo/sequence-labeling/sequence_label.py

...
@@ -30,8 +30,8 @@ parser.add_argument("--warmup_proportion", type=float, default=0.0, help="Warmup
 parser.add_argument("--max_seq_len", type=int, default=512, help="Number of words of the longest seqence.")
 parser.add_argument("--batch_size", type=int, default=32, help="Total examples' number in batch for training.")
 parser.add_argument("--checkpoint_dir", type=str, default=None, help="Directory to model checkpoint")
-parser.add_argument("--use_pyreader", type=bool, default=False, help="Whether use pyreader to feed data.")
-parser.add_argument("--use_data_parallel", type=bool, default=False, help="Whether use data parallel.")
+parser.add_argument("--use_pyreader", type=ast.literal_eval, default=False, help="Whether use pyreader to feed data.")
+parser.add_argument("--use_data_parallel", type=ast.literal_eval, default=False, help="Whether use data parallel.")
 args = parser.parse_args()
 # yapf: enable.
...
demo/text-classification/predict.py

...
@@ -34,7 +34,7 @@ parser.add_argument("--checkpoint_dir", type=str, default=None, help="Directory
 parser.add_argument("--batch_size", type=int, default=1, help="Total examples' number in batch for training.")
 parser.add_argument("--max_seq_len", type=int, default=512, help="Number of words of the longest seqence.")
 parser.add_argument("--use_gpu", type=ast.literal_eval, default=False, help="Whether use GPU for finetuning, input should be True or False")
-parser.add_argument("--use_pyreader", type=bool, default=False, help="Whether use pyreader to feed data.")
+parser.add_argument("--use_pyreader", type=ast.literal_eval, default=False, help="Whether use pyreader to feed data.")
 args = parser.parse_args()
 # yapf: enable.
...
demo/text-classification/text_classifier.py

...
@@ -32,8 +32,8 @@ parser.add_argument("--data_dir", type=str, default=None, help="Path to training
 parser.add_argument("--checkpoint_dir", type=str, default=None, help="Directory to model checkpoint")
 parser.add_argument("--max_seq_len", type=int, default=512, help="Number of words of the longest seqence.")
 parser.add_argument("--batch_size", type=int, default=32, help="Total examples' number in batch for training.")
-parser.add_argument("--use_pyreader", type=bool, default=False, help="Whether use pyreader to feed data.")
-parser.add_argument("--use_data_parallel", type=bool, default=False, help="Whether use data parallel.")
+parser.add_argument("--use_pyreader", type=ast.literal_eval, default=False, help="Whether use pyreader to feed data.")
+parser.add_argument("--use_data_parallel", type=ast.literal_eval, default=False, help="Whether use data parallel.")
 args = parser.parse_args()
 # yapf: enable.
...
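One trade-off of this fix, not stated in the commit itself: `ast.literal_eval` accepts only Python literals, so inputs like `yes` or `true` are rejected rather than coerced. Because argparse turns a `ValueError` from the `type` callable into a usage error, a bad value fails loudly instead of silently enabling a flag. A small sketch, reusing the `--use_pyreader` flag from the demos:

```python
import argparse
import ast

parser = argparse.ArgumentParser()
parser.add_argument("--use_pyreader", type=ast.literal_eval, default=False)

# Valid Python literals parse to real booleans.
args = parser.parse_args(["--use_pyreader", "True"])
print(args.use_pyreader)   # True

# "yes" is not a Python literal: ast.literal_eval raises ValueError,
# which argparse reports as a usage error and exits (SystemExit).
try:
    parser.parse_args(["--use_pyreader", "yes"])
except SystemExit:
    print("rejected non-literal input")
```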