Commit 4e377f8e authored by nhzlx

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into enhance_for_tensorrt_infer

@@ -1768,3 +1768,11 @@ reverse
 .. autofunction:: paddle.fluid.layers.reverse
     :noindex:
+
+.. _api_fluid_layers_rank_loss:
+
+rank_loss
+---------
+
+.. autofunction:: paddle.fluid.layers.rank_loss
+    :noindex:
@@ -456,52 +456,124 @@ def py_reader(capacity,
               name=None,
               use_double_buffer=True):
     """
-    Create a reader and blocking queue for data feeding in Python
-
-    This layer returns a Reader Variable and a BlockingQueue.
-    The BlockingQueue provides `push()` method to push a `LoDTensorArray`
-    object into the queue in Python side. In C++ side, the Reader
-    Variable would invoke `pop()` method of the queue to retrieve the
-    feeding data. The process of feeding data in Python side and fetching
-    data in C++ side can run in parallel. The BlockingQueue should be closed
-    using `close()` method when unused.
+    Create a Python reader for feeding data in Python
+
+    This layer returns a Reader Variable.
+    The Reader provides :code:`decorate_paddle_reader()` and
+    :code:`decorate_tensor_provider()` to set a Python generator as the data
+    source in Python side. When :code:`Executor::Run()` is invoked in C++
+    side, the data from the generator would be read automatically. Unlike
+    :code:`DataFeeder.feed()`, the data reading process and the
+    :code:`Executor::Run()` process can run in parallel using
+    :code:`py_reader`. The :code:`start()` method of the Reader should be
+    called when each pass begins, while the :code:`reset()` method should be
+    called when the pass ends and :code:`fluid.core.EOFException` is raised.
+    Note that the :code:`Program.clone()` method cannot clone
+    :code:`py_reader`.
 
     Args:
-        use_double_buffer(bool): Whether to use double buffer or not.
-        capacity(int): The maximum capacity of the BlockingQueue.
+        capacity(int): The buffer capacity maintained by :code:`py_reader`.
         shapes(list|tuple): List of tuples declaring the data shapes.
         dtypes(list|tuple): List of strs declaring the data types.
         lod_levels(list|tuple): List of ints declaring the data lod_level.
         name(basestring): The prefix of the Python queue name and Reader
             name. If None, it will be generated automatically.
+        use_double_buffer(bool): Whether to use double buffer or not.
 
     Returns:
-        tuple(Variable, BlockingQueue):
-        A Reader Variable from which we can get feeding data.
-        A BlockingQueue object for data feeding.
+        Variable: A Reader from which we can get feeding data.
 
     Examples:
-        .. code-block:: python
-
-          reader, queue = fluid.layers.py_reader(
-              capacity=10,
-              shapes=[[-1,3,224,224], [-1,1]],
-              dtypes=['float32', 'int64'])
-          # Via the reader, we can use 'read_file' layer to get data:
-          image, label = fluid.layers.read_file(reader)
-
-          # Via the blocking queue, we can feed data using threads
-          def feed_data(queue, feed_images, feed_labels):
-              for feed_image, feed_label in zip(feed_images, feed_labels):
-                  data = core.LoDTensorArray()
-                  data.append(feed_image)
-                  data.append(feed_label)
-                  queue.push(data)
-
-          thread = threading.Thread(target=feed_data,
-                                    args=(queue, feed_images, feed_labels))
-          thread.start()
+        1. The basic usage of :code:`py_reader` is as follows:
+
+        >>> import paddle.v2
+        >>> import paddle.fluid as fluid
+        >>> import paddle.dataset.mnist as mnist
+        >>>
+        >>> reader = fluid.layers.py_reader(capacity=64,
+        >>>                                 shapes=[(-1, 3, 224, 224), (-1, 1)],
+        >>>                                 dtypes=['float32', 'int64'])
+        >>> # The batch size and buf_size below are illustrative values.
+        >>> reader.decorate_paddle_reader(
+        >>>     paddle.v2.reader.shuffle(paddle.batch(mnist.train(), 512),
+        >>>                              buf_size=8192))
+        >>>
+        >>> img, label = fluid.layers.read_file(reader)
+        >>> loss = network(img, label)  # some network definition
+        >>>
+        >>> fluid.Executor(fluid.CUDAPlace(0)).run(fluid.default_startup_program())
+        >>>
+        >>> exe = fluid.ParallelExecutor(use_cuda=True, loss_name=loss.name)
+        >>> for epoch_id in range(10):
+        >>>     reader.start()
+        >>>     try:
+        >>>         while True:
+        >>>             exe.run(fetch_list=[loss.name])
+        >>>     except fluid.core.EOFException:
+        >>>         reader.reset()
+
+        2. When training and testing are both performed, two different
+           :code:`py_reader` should be created with different names, e.g.:
+
+        >>> import paddle.v2
+        >>> import paddle.fluid as fluid
+        >>> import paddle.dataset.mnist as mnist
+        >>>
+        >>> def network(reader):
+        >>>     img, label = fluid.layers.read_file(reader)
+        >>>     # Here we omit the network definition
+        >>>     return loss
+        >>>
+        >>> train_reader = fluid.layers.py_reader(
+        >>>     capacity=64,
+        >>>     shapes=[(-1, 3, 224, 224), (-1, 1)],
+        >>>     dtypes=['float32', 'int64'],
+        >>>     name='train_reader')
+        >>> train_reader.decorate_paddle_reader(
+        >>>     paddle.v2.reader.shuffle(paddle.batch(mnist.train(), 512),
+        >>>                              buf_size=8192))
+        >>>
+        >>> test_reader = fluid.layers.py_reader(
+        >>>     capacity=32,
+        >>>     shapes=[(-1, 3, 224, 224), (-1, 1)],
+        >>>     dtypes=['float32', 'int64'],
+        >>>     name='test_reader')
+        >>> test_reader.decorate_paddle_reader(paddle.batch(mnist.test(), 512))
+        >>>
+        >>> # Create train_main_prog and train_startup_prog
+        >>> train_main_prog = fluid.Program()
+        >>> train_startup_prog = fluid.Program()
+        >>> with fluid.program_guard(train_main_prog, train_startup_prog):
+        >>>     # Use fluid.unique_name.guard() to share parameters with the
+        >>>     # test program
+        >>>     with fluid.unique_name.guard():
+        >>>         train_loss = network(train_reader)  # some network definition
+        >>>         adam = fluid.optimizer.Adam(learning_rate=0.01)
+        >>>         adam.minimize(train_loss)
+        >>>
+        >>> # Create test_main_prog and test_startup_prog
+        >>> test_main_prog = fluid.Program()
+        >>> test_startup_prog = fluid.Program()
+        >>> with fluid.program_guard(test_main_prog, test_startup_prog):
+        >>>     # Use fluid.unique_name.guard() to share parameters with the
+        >>>     # train program
+        >>>     with fluid.unique_name.guard():
+        >>>         test_loss = network(test_reader)
+        >>>
+        >>> fluid.Executor(fluid.CUDAPlace(0)).run(train_startup_prog)
+        >>> fluid.Executor(fluid.CUDAPlace(0)).run(test_startup_prog)
+        >>>
+        >>> train_exe = fluid.ParallelExecutor(use_cuda=True,
+        >>>     loss_name=train_loss.name, main_program=train_main_prog)
+        >>> test_exe = fluid.ParallelExecutor(use_cuda=True,
+        >>>     loss_name=test_loss.name, main_program=test_main_prog)
+        >>> for epoch_id in range(10):
+        >>>     train_reader.start()
+        >>>     try:
+        >>>         while True:
+        >>>             train_exe.run(fetch_list=[train_loss.name])
+        >>>     except fluid.core.EOFException:
+        >>>         train_reader.reset()
+        >>>
+        >>>     test_reader.start()
+        >>>     try:
+        >>>         while True:
+        >>>             test_exe.run(fetch_list=[test_loss.name])
+        >>>     except fluid.core.EOFException:
+        >>>         test_reader.reset()
     """
     dtypes = [convert_np_dtype_to_dtype_(dt) for dt in dtypes]
     shape_concat = []
...
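The new docstring mentions :code:`decorate_tensor_provider()` but both examples use :code:`decorate_paddle_reader()`. A minimal sketch of the tensor-provider path, assuming the provider is a parameterless generator function yielding one numpy array per declared slot (the shapes and batch size below are illustrative):

    import numpy as np
    import paddle.fluid as fluid

    reader = fluid.layers.py_reader(capacity=64,
                                    shapes=[(-1, 3, 224, 224), (-1, 1)],
                                    dtypes=['float32', 'int64'])

    def tensor_provider():
        # Yield one list per batch; each element feeds one declared slot.
        for _ in range(100):
            img = np.random.random([32, 3, 224, 224]).astype('float32')
            label = np.random.randint(0, 10, size=[32, 1]).astype('int64')
            yield [img, label]

    # Assumption: decorate_tensor_provider() accepts a generator function
    # whose yielded numpy arrays are converted to LoDTensors internally.
    reader.decorate_tensor_provider(tensor_provider)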
@@ -110,6 +110,7 @@ __all__ = [
     'relu',
     'log',
     'crop',
+    'rank_loss',
 ]
@@ -5282,3 +5283,74 @@ def crop(x, shape=None, offsets=None, name=None):
         outputs={'Out': out},
         attrs=None if len(attrs) == 0 else attrs)
     return out
+
+
+def rank_loss(label, left, right, name=None):
+    """
+    **Rank loss layer for RankNet**
+
+    RankNet (http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf)
+    is a pairwise ranking model with a training sample consisting of a pair
+    of documents, A and B. Label P indicates whether A is ranked higher than
+    B or not:
+
+    P = {0, 1} or {0, 0.5, 1}, where 0.5 means that there is no information
+    about the rank of the input pair.
+
+    Rank loss layer takes three inputs: left (o_i), right (o_j) and
+    label (P_{i,j}). The inputs respectively represent RankNet's output
+    scores for documents A and B and the value of label P. The following
+    equation computes rank loss C_{i,j} from the inputs:
+
+    $$
+    C_{i,j} = -\tilde{P}_{i,j} \cdot o_{i,j} + \log(1 + e^{o_{i,j}}) \\
+    o_{i,j} = o_i - o_j \\
+    \tilde{P}_{i,j} \in \{0, 0.5, 1\} \text{ or } \{0, 1\}
+    $$
+
+    Rank loss layer takes batch inputs with size batch_size (batch_size >= 1).
+
+    Args:
+        label (Variable): Indicates whether A is ranked higher than B or not.
+        left (Variable): RankNet's output score for doc A.
+        right (Variable): RankNet's output score for doc B.
+        name (str|None): A name for this layer (optional). If set to None,
+            the layer will be named automatically.
+
+    Returns:
+        Variable: The value of rank loss.
+
+    Raises:
+        ValueError: If any of label, left, and right is not a Variable.
+
+    Examples:
+
+        .. code-block:: python
+
+            label = fluid.layers.data(name="label", shape=[4, 1], dtype="float32")
+            left = fluid.layers.data(name="left", shape=[4, 1], dtype="float32")
+            right = fluid.layers.data(name="right", shape=[4, 1], dtype="float32")
+            out = fluid.layers.rank_loss(label, left, right)
+    """
+    helper = LayerHelper('rank_loss', **locals())
+
+    if not isinstance(label, Variable):
+        raise ValueError("The label should be a Variable.")
+    if not isinstance(left, Variable):
+        raise ValueError("The left should be a Variable.")
+    if not isinstance(right, Variable):
+        raise ValueError("The right should be a Variable.")
+
+    out = helper.create_tmp_variable("float32")
+
+    helper.append_op(
+        type='rank_loss',
+        inputs={"Label": label,
+                "Left": left,
+                "Right": right},
+        outputs={'Out': out})
+    return out
@@ -443,6 +443,28 @@ class TestBook(unittest.TestCase):
         self.assertIsNotNone(ids)
         print(str(program))
 
+    def test_rank_loss(self):
+        program = Program()
+        with program_guard(program):
+            label = layers.data(
+                name='label',
+                append_batch_size=False,
+                shape=[16, 1],
+                dtype="float32")
+            left = layers.data(
+                name='left',
+                append_batch_size=False,
+                shape=[16, 1],
+                dtype="float32")
+            right = layers.data(
+                name='right',
+                append_batch_size=False,
+                shape=[16, 1],
+                dtype="float32")
+            out = layers.rank_loss(label, left, right, name="rank_loss")
+            self.assertIsNotNone(out)
+            print(str(program))
+
 
 if __name__ == '__main__':
     unittest.main()
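For completeness, a hedged end-to-end sketch (not part of the patch) that feeds random numpy data through the new layer with a plain Executor; the shapes mirror the unit test above:

    import numpy as np
    import paddle.fluid as fluid

    label = fluid.layers.data(name='label', append_batch_size=False,
                              shape=[16, 1], dtype='float32')
    left = fluid.layers.data(name='left', append_batch_size=False,
                             shape=[16, 1], dtype='float32')
    right = fluid.layers.data(name='right', append_batch_size=False,
                              shape=[16, 1], dtype='float32')
    loss = fluid.layers.rank_loss(label, left, right)

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())
    out, = exe.run(
        feed={'label': np.random.randint(0, 2, size=[16, 1]).astype('float32'),
              'left': np.random.random([16, 1]).astype('float32'),
              'right': np.random.random([16, 1]).astype('float32')},
        fetch_list=[loss])
    print(out.shape)  # one loss value per input pair: (16, 1)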