Unverified commit 408bd8b8, authored by xujiaqi01, committed by GitHub

fix en doc and test=document_fix (#20441)

* fix en doc for train_from_dataset and infer_from_dataset
Parent 76a58197
...@@ -30,9 +30,9 @@ paddle.fluid.load_op_library (ArgSpec(args=['lib_filename'], varargs=None, keywo
 paddle.fluid.Executor ('paddle.fluid.executor.Executor', ('document', '4d963107d87438b5add4a5288855bd04'))
 paddle.fluid.Executor.__init__ (ArgSpec(args=['self', 'place'], varargs=None, keywords=None, defaults=None), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
 paddle.fluid.Executor.close (ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None), ('document', '90b3268b71a8aceedd0dc9e311921d15'))
-paddle.fluid.Executor.infer_from_dataset (ArgSpec(args=['self', 'program', 'dataset', 'scope', 'thread', 'debug', 'fetch_list', 'fetch_info', 'print_period', 'fetch_handler'], varargs=None, keywords=None, defaults=(None, None, None, 0, False, None, None, 100, None)), ('document', '4ff256774ecaeee01c840a5fb5de8f7a'))
+paddle.fluid.Executor.infer_from_dataset (ArgSpec(args=['self', 'program', 'dataset', 'scope', 'thread', 'debug', 'fetch_list', 'fetch_info', 'print_period', 'fetch_handler'], varargs=None, keywords=None, defaults=(None, None, None, 0, False, None, None, 100, None)), ('document', '67de8ce7fbc618da50037d33cf7a7dbc'))
 paddle.fluid.Executor.run (ArgSpec(args=['self', 'program', 'feed', 'fetch_list', 'feed_var_name', 'fetch_var_name', 'scope', 'return_numpy', 'use_program_cache'], varargs=None, keywords=None, defaults=(None, None, None, 'feed', 'fetch', None, True, False)), ('document', 'de3878f012e60edad05fb24fd88ce910'))
-paddle.fluid.Executor.train_from_dataset (ArgSpec(args=['self', 'program', 'dataset', 'scope', 'thread', 'debug', 'fetch_list', 'fetch_info', 'print_period', 'fetch_handler'], varargs=None, keywords=None, defaults=(None, None, None, 0, False, None, None, 100, None)), ('document', '73024c79f46b4f14f1060edeaa4919c8'))
+paddle.fluid.Executor.train_from_dataset (ArgSpec(args=['self', 'program', 'dataset', 'scope', 'thread', 'debug', 'fetch_list', 'fetch_info', 'print_period', 'fetch_handler'], varargs=None, keywords=None, defaults=(None, None, None, 0, False, None, None, 100, None)), ('document', 'f35879c6935d87255d4317c7d0d02ab6'))
 paddle.fluid.global_scope (ArgSpec(args=[], varargs=None, keywords=None, defaults=None), ('document', 'f65788d9ead293ada47551339df12203'))
 paddle.fluid.scope_guard (ArgSpec(args=['scope'], varargs=None, keywords=None, defaults=None), ('document', '02fcfc1eda07c03a84ed62422366239c'))
 paddle.fluid.DistributeTranspiler ('paddle.fluid.transpiler.distribute_transpiler.DistributeTranspiler', ('document', 'b2b19821c5dffcd11473d6a4eef089af'))
......
...@@ -1048,11 +1048,17 @@ class Executor(object):
                            print_period=100,
                            fetch_handler=None):
         """
-        The document of infer_from_dataset is almost the same as
-        train_from_dataset, except that in distributed training,
-        push gradients will be disabled in infer_from_dataset.
-        infer_from_dataset() can be used for evaluation in multi-thread
-        very easily.
+        Infer from a pre-defined Dataset. Dataset is defined in paddle.fluid.dataset.
+        Given a program, either a program or a compiled program, infer_from_dataset will
+        consume all data samples in the dataset. The input scope can be given by users; by
+        default, scope is global_scope(). The total number of threads run in training is `thread`.
+        The thread number used in training will be the minimum of the thread number in Dataset
+        and the value of `thread` in this interface. `debug` can be set so that the executor
+        will display the run time of all operators and the throughput of the current infer task.
+
+        The document of infer_from_dataset is almost the same as train_from_dataset,
+        except that in distributed training, push gradients will be disabled in infer_from_dataset.
+        infer_from_dataset() can be used for evaluation in multi-thread very easily.
 
         Args:
             program(Program|CompiledProgram): the program that needs to be run,
...@@ -1062,11 +1068,11 @@ class Executor(object):
                 Please check the document of Dataset if needed. default is None
             scope(Scope): the scope used to run this program, you can switch it to different scope
                 for each run. default is global_scope
-            thread(int): number of thread a user wants to run in this function. The actual number
-                of thread will be min(Dataset.thread_num, thread) if thread > 0, default is 0
+            thread(int): number of threads a user wants to run in this function. Default is 0, which
+                means using the thread num of the dataset
             debug(bool): whether a user wants to run infer_from_dataset, default is False
-            fetch_list(Variable List): fetch variable list, each variable
-                will be printed during training, default is None
+            fetch_list(Variable List): fetch variable list, each variable will be printed during
+                training, default is None
             fetch_info(String List): print information for each variable, default is None
             print_period(int): the number of mini-batches for each print, default is 100
             fetch_handler(FetchHandler): a user-defined class for fetch output.
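For context, a minimal usage sketch of infer_from_dataset follows. It assumes the Dataset helpers from paddle.fluid.dataset referenced by the docstring (fluid.DatasetFactory, set_use_var, set_thread, set_filelist); the input names, shapes, and the empty file list are placeholders to be replaced with real data:

    import paddle.fluid as fluid

    place = fluid.CPUPlace()  # use fluid.CUDAPlace(0) to run on GPU
    exe = fluid.Executor(place)

    # network inputs; names and shapes here are illustrative only
    x = fluid.layers.data(name="x", shape=[10, 10], dtype="int64")
    y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)

    # build a Dataset that feeds x and y from a list of files
    dataset = fluid.DatasetFactory().create_dataset()
    dataset.set_use_var([x, y])
    dataset.set_thread(1)
    dataset.set_filelist([])  # placeholder; set your own data files, e.g. ["dataA.txt"]

    exe.run(fluid.default_startup_program())
    exe.infer_from_dataset(program=fluid.default_main_program(),
                           dataset=dataset)

Because the dataset, rather than a feed dict, drives the data flow, no DataFeeder is needed; each thread consumes files from the filelist independently.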
...@@ -1127,13 +1133,14 @@ class Executor(object):
                 Please check the document of Dataset if needed.
             scope(Scope): the scope used to run this program, you can switch it to different scope
                 for each run. default is global_scope
-            thread(int): number of thread a user wants to run in this function. The actual number
-                of thread will be min(Dataset.thread_num, thread)
+            thread(int): number of threads a user wants to run in this function. Default is 0, which
+                means using the thread num of the dataset
             debug(bool): whether a user wants to run train_from_dataset
-            fetch_list(Variable List): fetch variable list, each variable
-                will be printed during training
-            fetch_info(String List): print information for each variable
-            print_period(int): the number of mini-batches for each print
+            fetch_list(Variable List): fetch variable list, each variable will be printed
+                during training
+            fetch_info(String List): print information for each variable, its length should be
+                equal to the length of fetch_list
+            print_period(int): the number of mini-batches for each print, default is 100
             fetch_handler(FetchHandler): a user-defined class for fetch output.
 
         Returns:
......
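Correspondingly, a sketch of train_from_dataset that also exercises fetch_list, fetch_info, and print_period. The tiny network, the learning rate, and the empty filelist are assumptions made purely for illustration, not part of this diff:

    import paddle.fluid as fluid

    place = fluid.CPUPlace()  # use fluid.CUDAPlace(0) for GPU training
    exe = fluid.Executor(place)

    # a tiny illustrative network; names, shapes, and learning rate are assumptions
    x = fluid.layers.data(name="x", shape=[10], dtype="float32")
    y = fluid.layers.data(name="y", shape=[1], dtype="int64")
    fc = fluid.layers.fc(input=x, size=2, act="softmax")
    cost = fluid.layers.cross_entropy(input=fc, label=y)
    avg_cost = fluid.layers.mean(cost)
    fluid.optimizer.SGD(learning_rate=0.01).minimize(avg_cost)

    # Dataset feeding x and y from text files
    dataset = fluid.DatasetFactory().create_dataset()
    dataset.set_use_var([x, y])
    dataset.set_thread(2)
    dataset.set_filelist([])  # placeholder; set your own training files

    exe.run(fluid.default_startup_program())
    exe.train_from_dataset(program=fluid.default_main_program(),
                           dataset=dataset,
                           fetch_list=[avg_cost],    # variables printed during training
                           fetch_info=["avg_cost"],  # one label per fetched variable
                           print_period=100)         # print every 100 mini-batches

Unlike infer_from_dataset, gradients are pushed during train_from_dataset, which is the only behavioral difference called out in the docstring above.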