Commit 10cee7ed authored by chengduoZH

Add doc of fetch var

Parent 74d1bf4a
@@ -71,25 +71,21 @@ class DataToLoDTensorConverter(object):
 class DataFeeder(object):
     """
-    DataFeeder converts the data that returned by paddle.reader into a
-    data structure of Arguments which is defined in the API. The paddle.reader
-    usually returns a list of mini-batch data entries. Each data entry in
-    the list is one sample. Each sample is a list or a tuple with one feature
-    or multiple features. DataFeeder converts this mini-batch data entries
-    into Arguments in order to feed it to C++ interface.
+    DataFeeder converts the data returned by a reader into a data
+    structure that can be fed into Executor and ParallelExecutor. The reader
+    usually returns a list of mini-batch data entries. Each data entry in
+    the list is one sample. Each sample is a list or a tuple with one
+    feature or multiple features.

     The simple usage is shown below:

     .. code-block:: python

         place = fluid.CPUPlace()
-        data = fluid.layers.data(
-            name='data', shape=[1], dtype='int64', lod_level=2)
+        img = fluid.layers.data(name='image', shape=[1, 28, 28])
         label = fluid.layers.data(name='label', shape=[1], dtype='int64')
-        feeder = fluid.DataFeeder([data, label], place)
-        result = feeder.feed(
-            [([[1, 2, 3], [4, 5]], [1]), ([[6, 7, 8, 9]], [1])])
+        feeder = fluid.DataFeeder([img, label], fluid.CPUPlace())
+        result = feeder.feed([([0] * 784, [9]), ([1] * 784, [1])])

     If you want to feed data into GPU side separately in advance when you
@@ -105,12 +101,15 @@ class DataFeeder(object):
     Args:
         feed_list(list): The Variables or Variables' names that will
             be fed into the model.
-        place(Place): fluid.CPUPlace() or fluid.CUDAPlace(i).
+        place(Place): indicates whether the data is fed into the CPU or GPU.
+            If you want to feed data into the GPU, use `fluid.CUDAPlace(i)`
+            (`i` represents the GPU id); if you want to feed data into the
+            CPU, use `fluid.CPUPlace()`.
         program(Program): The Program that the data will feed into. If program
             is None, it will use default_main_program(). Default None.

     Raises:
-        ValueError: If the some Variable is not in the Program.
+        ValueError: If some Variable is not in this Program.

     Examples:
         .. code-block:: python

@@ -119,7 +118,7 @@ class DataFeeder(object):
             place = fluid.CPUPlace()
             feed_list = [
                 main_program.global_block().var(var_name) for var_name in feed_vars_name
-            ]
+            ]  # feed_vars_name is a list of variable names.
             feeder = fluid.DataFeeder(feed_list, place)
             for data in reader():
                 outs = exe.run(program=main_program,
@@ -156,8 +155,8 @@ class DataFeeder(object):
     def feed(self, iterable):
         """
-        According to feed_list and iterable converter the input data
-        into a dictionary that can feed into Executor or ParallelExecutor.
+        According to feed_list and iterable, converts the input into
+        a data structure that can be fed into Executor and ParallelExecutor.

         Args:
             iterable(list|tuple): the input data.
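The feed() method pairs each entry of feed_list with the corresponding slot of every sample and returns the result keyed by variable name. A hedged sketch of that name-to-batch pairing (`feed_like` and its plain-dict return value are assumptions for illustration; the real method builds a feed dict of LoDTensors):

```python
import numpy as np

def feed_like(feed_names, iterable):
    """Sketch of the feed() idea: pair each feed name with a batched
    array built from the matching slot of every sample.

    `feed_names` stands in for feed_list; illustrative only.
    """
    slots = zip(*iterable)  # one slot per feed name
    return {name: np.array(slot) for name, slot in zip(feed_names, slots)}

result = feed_like(['image', 'label'], [([0, 1], [9]), ([2, 3], [1])])
print(sorted(result))          # ['image', 'label']
print(result['image'].shape)   # (2, 2)
```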
@@ -189,11 +188,11 @@ class DataFeeder(object):
     def feed_parallel(self, iterable, num_places=None):
         """
-        Takes multiple mini-batches. Each mini-batch will be feed on each
-        device.
+        Takes multiple mini-batches. Each mini-batch will be fed to each
+        device in advance.

         Args:
             iterable(list|tuple): the input data.
-            num_places(int): the number of places. Default None.
+            num_places(int): the number of devices. Default None.

         Returns:
             dict: the result of conversion.
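The idea behind feed_parallel, handing one mini-batch to each device, can be sketched as follows (`split_across_places` and the integer place ids are hypothetical names for illustration; the real method converts each mini-batch for its target place):

```python
def split_across_places(minibatches, num_places):
    """Sketch of the feed_parallel idea: assign exactly one mini-batch
    to each device slot. Illustrative only, not fluid's implementation."""
    if len(minibatches) != num_places:
        raise ValueError("need exactly one mini-batch per place")
    # Key each mini-batch by a device (place) index.
    return {place_id: batch for place_id, batch in enumerate(minibatches)}

assignment = split_across_places([['batch0'], ['batch1']], num_places=2)
print(assignment[0])  # ['batch0']
print(assignment[1])  # ['batch1']
```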
@@ -135,14 +135,18 @@ def has_fetch_operators(block, fetch_targets, fetch_holder_name):
 def fetch_var(name, scope=None, return_numpy=True):
     """
-    Fetch the value of the variable with the given name from the given scope
+    Fetch the value of the variable with the given name from the
+    given scope.

     Args:
         name(str): name of the variable. Typically, only persistable variables
             can be found in the scope used for running the program.
         scope(core.Scope|None): scope object. It should be the scope that
             you pass to Executor.run() when running your program.
-            If None, global_scope() will be used.
-        return_numpy(bool): whether convert the tensor to numpy.ndarray
+            If None, global_scope() will be used. Default None.
+        return_numpy(bool): whether to convert the tensor to a numpy.ndarray.
+            Default True.

     Returns:
         LodTensor|numpy.ndarray
     """
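The lookup-and-convert behavior this docstring describes can be sketched with a stand-in scope object (`FakeScope` and `fetch_var_like` are hypothetical names; fluid's real `core.Scope` and tensor conversion differ):

```python
import numpy as np

class FakeScope:
    """Stand-in for fluid's core.Scope, for illustration only."""
    def __init__(self):
        self._vars = {}

    def set_var(self, name, value):
        self._vars[name] = value

    def find_var(self, name):
        return self._vars.get(name)

def fetch_var_like(name, scope, return_numpy=True):
    """Sketch of fetch_var: look the variable up by name in the scope,
    raise if absent, and optionally convert the value to numpy."""
    value = scope.find_var(name)
    if value is None:
        raise ValueError("variable '%s' is not found in the scope" % name)
    return np.asarray(value) if return_numpy else value

scope = FakeScope()
scope.set_var('fc_0.w_0', [[1.0, 2.0], [3.0, 4.0]])
w = fetch_var_like('fc_0.w_0', scope)
print(w.shape)  # (2, 2)
```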