BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit 10cee7ed — Add doc of fetch var
Authored June 20, 2018 by chengduoZH
Parent: 74d1bf4a
Showing 2 changed files with 24 additions and 21 deletions (+24, -21):

python/paddle/fluid/data_feeder.py (+17, -18)
python/paddle/fluid/executor.py (+7, -3)
python/paddle/fluid/data_feeder.py

@@ -71,25 +71,21 @@ class DataToLoDTensorConverter(object):
 class DataFeeder(object):
     """
-    DataFeeder converts the data that returned by paddle.reader into a
-    data structure of Arguments which is defined in the API. The paddle.reader
+    DataFeeder converts the data that returned by a reader into a data
+    structure that can feed into Executor and ParallelExecutor. The reader
     usually returns a list of mini-batch data entries. Each data entry in
-    the list is one sample. Each sample is a list or a tuple with one feature
-    or multiple features. DataFeeder converts this mini-batch data entries
-    into Arguments in order to feed it to C++ interface.
+    the list is one sample. Each sample is a list or a tuple with one
+    feature or multiple features.

     The simple usage shows below:

     .. code-block:: python

-        place = fluid.CPUPlace()
-        data = fluid.layers.data(
-            name='data', shape=[1], dtype='int64', lod_level=2)
+        img = fluid.layers.data(name='image', shape=[1, 28, 28])
         label = fluid.layers.data(name='label', shape=[1], dtype='int64')
-        feeder = fluid.DataFeeder([data, label], place)
-        result = feeder.feed(
-            [([[1, 2, 3], [4, 5]], [1]), ([[6, 7, 8, 9]], [1])])
+        feeder = fluid.DataFeeder([img, label], fluid.CPUPlace())
+        result = feeder.feed([([0] * 784, [9]), ([1] * 784, [1])])

     If you want to feed data into GPU side separately in advance when you
...
@@ -105,12 +101,15 @@ class DataFeeder(object):
     Args:
         feed_list(list): The Variables or Variables'name that will
             feed into model.
-        place(Place): fluid.CPUPlace() or fluid.CUDAPlace(i).
+        place(Place): place indicates feed data into CPU or GPU, if you want to
+            feed data into GPU, please using `fluid.CUDAPlace(i)` (`i` represents
+            the GPU id), or if you want to feed data into CPU, please using
+            `fluid.CPUPlace()`.
         program(Program): The Program that will feed data into, if program
             is None, it will use default_main_program(). Default None.

     Raises:
-        ValueError: If the some Variable is not in the Program.
+        ValueError: If some Variable is not in this Program.

     Examples:
         .. code-block:: python
...
@@ -119,7 +118,7 @@ class DataFeeder(object):
             place = fluid.CPUPlace()
             feed_list = [
                 main_program.global_block().var(var_name) for var_name in feed_vars_name
-            ]
+            ] # feed_vars_name is a list of variables' name.
             feeder = fluid.DataFeeder(feed_list, place)
             for data in reader():
                 outs = exe.run(program=main_program,
...
@@ -156,8 +155,8 @@ class DataFeeder(object):
     def feed(self, iterable):
         """
-        According to feed_list and iterable converter the input data
-        into a dictionary that can feed into Executor or ParallelExecutor.
+        According to feed_list and iterable, converts the input into
+        a data structure that can feed into Executor and ParallelExecutor.

         Args:
             iterable(list|tuple): the input data.
...
@@ -189,11 +188,11 @@ class DataFeeder(object):
     def feed_parallel(self, iterable, num_places=None):
         """
         Takes multiple mini-batches. Each mini-batch will be feed on each
-        device.
+        device in advance.

         Args:
             iterable(list|tuple): the input data.
-            num_places(int): the number of places. Default None.
+            num_places(int): the number of devices. Default None.

         Returns:
             dict: the result of conversion.
...
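The feed() behavior documented above — turning a list of per-sample tuples into one batch per input variable — can be sketched in plain Python. This is a conceptual illustration only (the function name `feed_sketch` is hypothetical, not Paddle API); the real DataFeeder additionally converts each batch into an LoDTensor on the target place:

```python
def feed_sketch(feed_names, minibatch):
    """Group a list of per-sample tuples into one list per feed variable.

    feed_names: names of the feed variables, e.g. ["image", "label"].
    minibatch:  list of samples; each sample is a tuple with one entry
                per feed variable, mirroring DataFeeder.feed()'s input.
    """
    batches = {name: [] for name in feed_names}
    for sample in minibatch:
        if len(sample) != len(feed_names):
            raise ValueError("each sample must supply every feed variable")
        for name, value in zip(feed_names, sample):
            batches[name].append(value)
    return batches

# Mirrors the docstring example: two samples of (784-pixel image, label).
result = feed_sketch(["image", "label"],
                     [([0] * 784, [9]), ([1] * 784, [1])])
```

The returned dict maps each feed name to its batched values, which is the shape of the feed argument Executor.run() ultimately consumes.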
python/paddle/fluid/executor.py

@@ -135,14 +135,18 @@ def has_fetch_operators(block, fetch_targets, fetch_holder_name):
 def fetch_var(name, scope=None, return_numpy=True):
     """
-    Fetch the value of the variable with the given name from the given scope
+    Fetch the value of the variable with the given name from the given scope.
+
     Args:
         name(str): name of the variable. Typically, only persistable variables
             can be found in the scope used for running the program.
         scope(core.Scope|None): scope object. It should be the scope where
             you pass to Executor.run() when running your program.
-            If None, global_scope() will be used.
-        return_numpy(bool): whether convert the tensor to numpy.ndarray
+            If None, global_scope() will be used. Default None.
+        return_numpy(bool): whether convert the tensor to numpy.ndarray.
+            Default True.
+
     Returns:
         LodTensor|numpy.ndarray
     """
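The lookup that this docstring describes amounts to three steps: fall back to the global scope when none is given, find the variable by name (raising if it is absent), and optionally convert the tensor. A toy sketch under the assumption that a plain dict stands in for core.Scope (`fetch_var_sketch` and `_global_scope` are hypothetical names, not Paddle API):

```python
# Toy stand-in for paddle's global_scope(): name -> stored value.
_global_scope = {"learning_rate": [0.001]}

def fetch_var_sketch(name, scope=None, return_numpy=True):
    """Mimic fetch_var's control flow with a dict-based scope."""
    if scope is None:
        scope = _global_scope          # default to the global scope
    if name not in scope:
        raise ValueError("variable '%s' is not found in the scope" % name)
    value = scope[name]
    # Stand-in for the LoDTensor -> numpy.ndarray conversion: return a copy.
    return list(value) if return_numpy else value

lr = fetch_var_sketch("learning_rate")
```

Passing an explicit dict as `scope` overrides the global fallback, matching the documented behavior of passing the scope used in Executor.run().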