Commit 9e258184 authored by Yi Wang

Merge branch 'develop' of http://github.com/paddlepaddle/paddle into fix_cpplint_errors_operators_detail

# API Doc Standard
- [API Doc Structure](#api-doc-structure)
- [Format and Examples](#format-and-examples)
- [Complete Example](#complete-example)
## API Doc Structure
An API doc should contain the following parts (please write them in order; a skeleton combining all parts is sketched after this list):
- Python API Definition
The definition of the API.
- Function Description
Description of the API's function.
The description includes: the meaning and purpose of the API, its operation on the input, references and corresponding links (if any), formulas (if necessary), and explanations of key variables in the formulas.
- Args Description
Description of the API's parameters.
Introduce the parameters one by one, following their order in the API definition.
The introduction includes: data type, default value (if any), meaning, etc.
- Returns
Description of the API's return value.
Introduce the meaning of the return value, and provide the corresponding format if necessary.
If the return value is a tuple containing multiple elements, introduce the elements one by one in order.
- Raises(if any)
Exceptions and errors that may occur, with their possible causes. If there is more than one possible exception or error, list them in order.
- Note(if any)
Matters needing attention. If there is more than one note, list them in order.
- Examples
Examples of how to use the API.
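
Putting the parts above together, one possible skeleton is sketched below. The operator `my_op`, its arguments, and its behavior are hypothetical and serve only to illustrate the ordering of the parts:
```
def my_op(x, scale=1.0, name=None):
    r"""
    **My Op**

    Multiplies the input tensor by a scalar factor.

    .. math::

        Out = scale \cdot X

    Args:
        x (Variable): The input tensor of this layer.
        scale (float, default 1.0): The scaling factor applied to each element.
        name (str, default None): The name of this layer.

    Returns:
        A tensor variable storing the scaled result.

    Raises:
        ValueError: If the rank of the input is less than 1.

    Note:
        `my_op` is a hypothetical operator, shown only to illustrate
        the documentation structure described above.

    Examples:
        .. code-block:: python

            out = my_op(x, scale=2.0)
    """
    return x * scale
```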
## Format and Examples
API documentation must follow the reStructuredText format; please refer to [here](http://sphinx-doc-zh.readthedocs.io/en/latest/rest.html).
The format and examples for each part of the API documentation are as follows (taking fc as an example):
- Python API Definition
- Format
[Python API Definition]
- Example
```
fc(input,
size,
num_flatten_dims=1,
param_attr=None,
bias_attr=None,
act=None,
name=None,
main_program=None,
startup_program=None)
```
- Function Description
- Format
This part contains the following (please write them in order):
[Function Description]
[Formula]
[Symbols' Descriptions if necessary]
[References if necessary]
- Example
[Function Description]
```
**Fully Connected Layer**
The fully connected layer can take multiple tensors as its inputs. It
creates a variable called weights for each input tensor, which represents
a fully connected weight matrix from each input unit to each output unit.
The fully connected layer multiplies each input tensor with its corresponding
weight to produce an output Tensor. If multiple input tensors are given,
the results of multiple multiplications will be summed up. If bias_attr is
not None, a bias variable will be created and added to the output. Finally,
if activation is not None, it will be applied to the output as well.
```
[Formula]
```
This process can be formulated as follows:

.. math::

    Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})
```
[Symbols' Descriptions if necessary]
```
In the above equation:
* :math:`N`: The number of input tensors.
* :math:`X_i`: The i-th input tensor.
* :math:`W_i`: The i-th weight matrix created by this layer.
* :math:`b`: The bias parameter created by this layer (if needed).
* :math:`Act`: The activation function.
* :math:`Out`: The output tensor.
```
[References if necessary]
Since fc needs no reference, we omit it here. Under other circumstances, please provide an explicit reference and link; take layer_norm as an example:
```
Refer to `Layer Normalization <https://arxiv.org/pdf/1607.06450v1.pdf>`_ for more details.
```
- Args Description
- Format
\[Arg's Name\][(Data Type, Default Value)][Description]
- Example
Some of the fc parameters are documented as follows:
```
Args:
    input (Variable|list of Variable): The input tensor(s) of this layer. The dimension
                                       of the input tensor(s) must be at least 2.
param_attr (ParamAttr|list of ParamAttr, default None): The parameter attribute for learnable
parameters/weights of this layer.
name (str, default None): The name of this layer.
```
- Returns
- Format
[Name][Shape]
- Example
```
Returns:
A tensor variable storing the transformation result.
```
When the returned value is a tuple containing multiple elements, please introduce every element in order; take dynamic_lstm as an example:
```
Returns:
A tuple containing:
The hidden state of LSTM whose shape is (T X D).
The cell state of LSTM whose shape is (T X D).
```
- Raises
- Format
[Exception Type][Condition]
- Example
```
Raises:
ValueError: If the rank of the input is less than 2.
```
- Note
- Format
[Note]
- Example
There is no Note for fc, so we omit this part. If there is any note, please write it clearly. If there is more than one note, list them in order. Take scaled\_dot\_product\_attention as an example:
```
Note:
1. When num_heads > 1, three linear projections are learned respectively
to map input queries, keys and values into queries', keys' and values'.
queries', keys' and values' have the same shapes with queries, keys
and values.
2. When num_heads == 1, scaled_dot_product_attention has no learnable
parameters.
```
- Examples
- Format
\[Python Code Snippet]
- Example
```
Examples:
    .. code-block:: python

        data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
        fc = fluid.layers.fc(input=data, size=1000, act="tanh")
```
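Example code should be runnable on its own whenever possible. A slightly fuller sketch of the same example, assuming the `paddle.fluid` package is importable, adds the import explicitly:
```
Examples:
    .. code-block:: python

        import paddle.fluid as fluid

        data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
        fc = fluid.layers.fc(input=data, size=1000, act="tanh")
```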
## Complete Example
For the complete example of fc, please see [here](src/fc.py).
```
@@ -104,7 +104,7 @@ cc_test(init_test SRCS init_test.cc DEPS init)
 cc_test(op_kernel_type_test SRCS op_kernel_type_test.cc DEPS place device_context framework_proto)
 cc_test(cow_ptr_tests SRCS details/cow_ptr_test.cc)
-# cc_test(channel_test SRCS channel_test.cc)
+cc_test(channel_test SRCS channel_test.cc)
 cc_test(tuple_test SRCS tuple_test.cc )
 cc_test(concurrency_test SRCS concurrency_test.cc DEPS go_op channel_close_op channel_create_op
         channel_send_op channel_recv_op sum_op select_op elementwise_add_op compare_op
```

```
@@ -138,8 +138,8 @@ void ChannelImpl<T>::Send(T *item) {
   // If channel is closed, throw exception
   if (closed_) {
-    lock.unlock();
     send_return();
+    lock.unlock();
     PADDLE_THROW("Cannot send on closed channel");
   }
@@ -152,11 +152,9 @@ void ChannelImpl<T>::Send(T *item) {
   if (m != nullptr) {
     *(m->data) = std::move(*item);
     m->Notify();
-    lock.unlock();
     send_return();
     return;
   } else {
-    lock.unlock();
     Send(item);
     send_return();
     return;
@@ -169,8 +167,6 @@ void ChannelImpl<T>::Send(T *item) {
   if (buf_.size() < cap_) {
     // Copy to buffer
     buf_.push_back(std::move(*item));
-    // Release lock and return true
-    lock.unlock();
     send_return();
     return;
   }
@@ -181,8 +177,8 @@ void ChannelImpl<T>::Send(T *item) {
   sendq.push_back(m);
   m->Wait(lock);
   if (m->chan_closed) {
-    lock.unlock();
     send_return();
+    lock.unlock();
     PADDLE_THROW("Cannot send on closed channel");
   }
   send_return();
@@ -195,10 +191,7 @@ bool ChannelImpl<T>::Receive(T *item) {
   // If channel is closed and buffer is empty or
   // channel is unbuffered
-  if (closed_ && buf_.empty()) {
-    lock.unlock();
-    return recv_return(false);
-  }
+  if (closed_ && buf_.empty()) return recv_return(false);

   // If there is a sender, directly receive the value we want
   // from the sender. In case of a buffered channel, read from
@@ -229,7 +222,6 @@ bool ChannelImpl<T>::Receive(T *item) {
     } else
       return recv_return(Receive(item));
   }
-  lock.unlock();
   return recv_return(true);
 }
@@ -238,8 +230,7 @@ bool ChannelImpl<T>::Receive(T *item) {
   // Directly read from buffer
   *item = std::move(buf_.front());
   buf_.pop_front();
-  // Release lock and return true
-  lock.unlock();
+  // return true
   return recv_return(true);
 }
```