-eval = seqtext_printer_evaluator(input=maxid,
+eval = seqtext_evaluator.printer(input=maxid,
                                  id_input=sample_id,
                                  dict_file=dict_file,
                                  result_file=result_file)
@@ -683,13 +683,13 @@ Default is True. No space is added if set to False.
value_printer
-class paddle.v2.evaluator.value_printer(*args, **kwargs)
+paddle.v2.evaluator.value_printer(*args, **xargs)
This Evaluator is used to print the values of input layers. It contains
one or more input layers.
The simple usage is:
-eval = value_printer_evaluator(input)
+eval = value_evaluator.printer(input)
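In the renamed v2 API this evaluator is attached to a layer inside the network configuration; a minimal sketch, assuming import paddle.v2 as paddle and a previously defined layer fc_out (both names are illustrative):
import paddle.v2 as paddle
# fc_out is an assumed, previously defined layer whose values we print.
paddle.evaluator.value_printer(input=fc_out)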
diff --git a/develop/doc/api/v2/config/layer.html b/develop/doc/api/v2/config/layer.html
index cc431b0f0573523bebac84138c869f47099791dd..880901bf6e7e99f39aa7e6288fed4defc73fb040 100644
--- a/develop/doc/api/v2/config/layer.html
+++ b/develop/doc/api/v2/config/layer.html
@@ -189,35 +189,10 @@
Data layer
data
-class paddle.v2.layer.data(name, type, **kwargs)
-Define DataLayer For NeuralNetwork.
-The example usage is:
-data = paddle.layer.data(name="input", type=paddle.data_type.dense_vector(1000))
-Parameters:
-- name (basestring) – Name of this data layer.
-- type – Data type of this data layer.
-- height (int|None) – Height of this data layer, used for image.
-- width (int|None) – Width of this data layer, used for image.
-- layer_attr (paddle.v2.attr.ExtraAttribute) – Extra Layer Attribute.
-Returns: paddle.v2.config_base.Layer object.
-Return type: paddle.v2.config_base.Layer
+paddle.v2.layer.data
+alias of name
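The removed docstring's example still shows how the layer is declared; a minimal sketch using the v2 names recorded above (the 1000-dimensional dense vector is illustrative):
import paddle.v2 as paddle
# Declare a 1000-dimensional dense input named "input".
img = paddle.layer.data(name="input", type=paddle.data_type.dense_vector(1000))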
@@ -228,7 +203,7 @@
fc
-class paddle.v2.layer.fc(*args, **kwargs)
+class paddle.v2.layer.fc
Helper to declare a fully connected layer.
The example usage is:
fc = fc(input=layer,
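The snippet above is cut off at the hunk boundary; a hedged completion in the same docstring style (size, activation and bias settings are illustrative):
fc = fc(input=layer,
        size=1024,
        act=paddle.v2.Activation.Linear(),  # illustrative activation choice
        bias_attr=False)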
@@ -274,7 +249,7 @@ default Bias.
selective_fc
-class paddle.v2.layer.selective_fc(*args, **kwargs)
+class paddle.v2.layer.selective_fc
Selective fully connected layer. Different from fc, the output
of this layer may be sparse. It requires an additional input to indicate
several selected columns for output. If the selected columns are not
@@ -321,7 +296,7 @@ default Bias.
conv_operator
-class paddle.v2.layer.conv_operator(**kwargs)
+class paddle.v2.layer.conv_operator
Different from img_conv, conv_op is an Operator that can be used in mixed.
conv_op takes two inputs to perform convolution: the first input is the image
and the second is the filter kernel. It only
@@ -369,7 +344,7 @@ the filter’s shape can be (filter_size, filter_size_y).
conv_projection
-class paddle.v2.layer.conv_projection(**kwargs)
+class paddle.v2.layer.conv_projection
Different from img_conv and conv_op, conv_projection is a Projection, which
can be used in mixed and concat. It uses cudnn to implement convolution and
only supports GPU mode.
@@ -417,7 +392,7 @@ the filter’s shape can be (filter_size, filter_size_y).
conv_shift
-class paddle.v2.layer.conv_shift(*args, **kwargs)
+class paddle.v2.layer.conv_shift
- This layer performs cyclic convolution on two inputs. For example:
@@ -470,7 +445,7 @@ the right size (which is the end of array) to the left.
img_conv
-class paddle.v2.layer.img_conv(*args, **kwargs)
+class paddle.v2.layer.img_conv
Convolution layer for image. Paddle currently supports both square and
non-square input.
For details of the convolution layer, please refer to UFLDL’s convolution.
@@ -548,7 +523,7 @@ otherwise layer_type has to be either “exconv” or
context_projection
-class paddle.v2.layer.context_projection(**kwargs)
+class paddle.v2.layer.context_projection
Context Projection.
It simply reorganizes the input sequence, combining “context_len” elements
into one context starting from context_start. “context_start” will be set to
@@ -591,7 +566,7 @@ parameter attribute is set by this parameter.
img_pool
-class paddle.v2.layer.img_pool(*args, **kwargs)
+class paddle.v2.layer.img_pool
Image pooling Layer.
For details of the pooling layer, please refer to UFLDL’s pooling.
@@ -655,7 +630,7 @@ Default is True. If set false, otherwise use floor.
spp
-class paddle.v2.layer.spp(*args, **kwargs)
+class paddle.v2.layer.spp
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
For details, please refer to Kaiming He’s paper.
@@ -695,7 +670,7 @@ The details please refer to
maxout
-class paddle.v2.layer.maxout(*args, **kwargs)
+class paddle.v2.layer.maxout
- A layer to do maxout on conv layer output.
@@ -752,7 +727,7 @@ automatically from previous output.
img_cmrnorm
-class paddle.v2.layer.img_cmrnorm(*args, **kwargs)
+class paddle.v2.layer.img_cmrnorm
Response normalization across feature maps.
For details, please refer to Alex’s paper.
@@ -791,7 +766,7 @@ num_channels is None, it will be set automatically.
batch_norm
-class paddle.v2.layer.batch_norm(*args, **kwargs)
+class paddle.v2.layer.batch_norm
Batch Normalization Layer. The notation of this layer is as follows.
\(x\) is the input features over a mini-batch.
@@ -863,7 +838,7 @@ computation, referred to as factor,
sum_to_one_norm
-class paddle.v2.layer.sum_to_one_norm(*args, **kwargs)
+class paddle.v2.layer.sum_to_one_norm
A layer for sum-to-one normalization,
which is used in NEURAL TURING MACHINE.
@@ -900,7 +875,7 @@ and
\(out\) is a (batchSize x dataDim) output vector.
cross_channel_norm
-class paddle.v2.layer.cross_channel_norm(*args, **kwargs)
+class paddle.v2.layer.cross_channel_norm
Normalize a layer’s output. This layer is necessary for ssd. It applies
normalization across the channels of each sample of a conv layer’s output and
scales the output by a group of trainable factors whose dimensions equal the
number of channels.
@@ -931,7 +906,7 @@ factors which dimensions equal to the channel’s number.
recurrent
-class paddle.v2.layer.recurrent(*args, **kwargs)
+class paddle.v2.layer.recurrent
Simple recurrent unit layer. It is just a fully connected layer through both
time steps and the network.
For each sequence [start, end] it performs the following computation:
@@ -971,7 +946,7 @@ out_{i} = act(in_{i} + out_{i+1} * W) \ \ \text{for} \ start <= i < end\en
lstmemory
-class paddle.v2.layer.lstmemory(*args, **kwargs)
+class paddle.v2.layer.lstmemory
Long Short-term Memory Cell.
The memory cell is implemented by the following equations.
@@ -1020,7 +995,7 @@ bias.
grumemory
-class paddle.v2.layer.grumemory(*args, **kwargs)
+class paddle.v2.layer.grumemory
Gated Recurrent Unit Layer.
The memory cell is implemented by the following equations.
1. update gate \(z\): defines how much of the previous memory to
@@ -1092,7 +1067,7 @@ will get a warning.
memory
-class paddle.v2.layer.memory(name, extra_input=None, **kwargs)
+class paddle.v2.layer.memory
The memory layer is a layer that crosses time steps: its output is referenced
as the previous time step’s output of the layer given by name.
The default memory is zero in the first time step, and the previous time step’s
@@ -1101,12 +1076,12 @@ output in the rest time steps.
with activation.
If boot_with_const_id is set, then the first time step is an IndexSlot, and
Arguments.ids()[0] is this cost_id.
-If boot_layer is not null, the memory is just the boot_layer’s output.
+If boot is not null, the memory is just the boot’s output.
Set is_seq to true if the boot layer is a sequence.
The layer with the same name in the recurrent group will set this memory on
each time step.
mem = memory(size=256, name='state')
-state = fc_layer(input=mem, size=256, name='state')
+state = fc(input=mem, size=256, name='state')
If you do not want to specify the name, you can equivalently use set_input()
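Read together, the two lines above wire a recurrent state: memory refers, by name, to the output that the fc layer produces one time step later. A minimal sketch of the same pattern inside a step function (names are illustrative):
def step(input):
    mem = memory(size=256, name='state')
    # 'state' below is the layer the memory above refers back to.
    state = fc(input=[input, mem], size=256, name='state')
    return state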
@@ -1122,18 +1097,18 @@ name of the layer which this memory remembers.
- size (int) – size of memory.
- memory_name (basestring) – the name of the memory.
It is ignored when name is provided.
-- is_seq (bool) – is sequence for boot_layer
-- boot_layer (LayerOutput|None) – boot layer of memory.
-- boot_bias (ParameterAttribute|None) – boot layer’s bias
-- boot_bias_active_type (BaseActivation) – boot layer’s active type.
+- is_seq (bool) – is sequence for boot
+- boot (paddle.v2.config_base.Layer|None) – boot layer of memory.
+- boot_bias (paddle.v2.attr.ParameterAttribute|None) – boot layer’s bias
+- boot_bias_active_type (paddle.v2.Activation.Base) – boot layer’s active type.
- boot_with_const_id (int) – boot layer’s id.
-Returns: LayerOutput object which is a memory.
+Returns: paddle.v2.config_base.Layer object which is a memory.
-Return type: LayerOutput
+Return type: paddle.v2.config_base.Layer
@@ -1153,9 +1128,9 @@ sequence input. This is extremely useful for attention-based models, or
Neural Turing Machine like models.
The basic usage (time steps) is:
def step(input):
-    output = fc_layer(input=layer,
+    output = fc(input=layer,
                      size=1024,
-                     act=LinearActivation(),
+                     act=paddle.v2.Activation.Linear(),
                      bias_attr=False)
    return output
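The step function above is then applied across the sequence by recurrent_group; a minimal sketch, assuming seq_input is a previously defined sequence layer:
# seq_input is an assumed sequence input; each time step is fed to step().
rnn_out = recurrent_group(step=step, input=seq_input)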
@@ -1165,8 +1140,8 @@ Neural Turning Machine like models.
You can see following configs for further usages:
-- time steps: lstmemory_group, paddle/gserver/tests/sequence_layer_group.conf, demo/seqToseq/seqToseq_net.py
-- sequence steps: paddle/gserver/tests/sequence_nest_layer_group.conf
+- time steps: lstmemory_group, paddle/gserver/tests/sequence_group.conf, demo/seqToseq/seqToseq_net.py
+- sequence steps: paddle/gserver/tests/sequence_nest_group.conf
@@ -1182,24 +1157,24 @@ a time step result. Then gather each time step of output into
layer group’s output.
- name (basestring) – recurrent_group’s name.
-- input (LayerOutput|StaticInput|SubsequenceInput|list|tuple) – Input links array.
-LayerOutput will be scattered into time steps.
+- input (paddle.v2.config_base.Layer|StaticInput|SubsequenceInput|list|tuple) – Input links array.
+paddle.v2.config_base.Layer will be scattered into time steps.
SubsequenceInput will be scattered into sequence steps.
StaticInput will be imported to each time step, and doesn’t change
through time. It’s a mechanism to access a layer outside the step function.
- reverse (bool) – If reverse is set true, the recurrent unit will process the
input sequence in a reverse order.
-- targetInlink (LayerOutput|SubsequenceInput) – the input layer which shares info with the layer group’s output.
+- targetInlink (paddle.v2.config_base.Layer|SubsequenceInput) – the input layer which shares info with the layer group’s output.
Param input specifies multiple input layers. For SubsequenceInput inputs, the
config should assign one input layer that shares info (the number of sentences
and the number of words in each sentence) with all the layer group’s outputs.
targetInlink should be one of the layer group’s inputs.
-- is_generating – If generating, none of the input types should be LayerOutput;
+- is_generating – If generating, none of the input types should be paddle.v2.config_base.Layer;
else, for training or testing, one of the input type must
-be LayerOutput.
+be paddle.v2.config_base.Layer.
@@ -1210,9 +1185,9 @@ be LayerOutput.
-Returns: LayerOutput object.
+Returns: paddle.v2.config_base.Layer object.
-Return type: LayerOutput
+Return type: paddle.v2.config_base.Layer
@@ -1223,7 +1198,7 @@ be LayerOutput.
lstm_step
-class paddle.v2.layer.lstm_step(*args, **kwargs)
+class paddle.v2.layer.lstm_step
LSTM Step Layer. It is used in recurrent_group. The LSTM equations are shown
as follows.
@@ -1273,7 +1248,7 @@ be sigmoid only.
gru_step
-class paddle.v2.layer.gru_step(*args, **kwargs)
+class paddle.v2.layer.gru_step
@@ -1314,7 +1289,7 @@ to maintain tractability.
The example usage is:
def rnn_step(input):
    last_time_step_output = memory(name='rnn', size=512)
-    with mixed_layer(size=512, name='rnn') as simple_rnn:
+    with mixed(size=512, name='rnn') as simple_rnn:
        simple_rnn += full_matrix_projection(input)
        simple_rnn += last_time_step_output
    return simple_rnn
@@ -1376,7 +1351,7 @@ beam size.
Returns: The generated word index.
-Return type: LayerOutput
+Return type: paddle.v2.config_base.Layer
@@ -1388,7 +1363,7 @@ beam size.
get_output
-class paddle.v2.layer.get_output(*args, **kwargs)
+class paddle.v2.layer.get_output
Get a layer’s output by name. In PaddlePaddle, a layer might return multiple
values, but it returns one layer’s output by default. If the user wants to use
another output besides the default one, please use get_output first to get
@@ -1429,17 +1404,17 @@ multiple outputs.
Each input is a projection or operator.
There are two styles of usage.
-- When not set inputs parameter, use mixed_layer like this:
+- When not set inputs parameter, use mixed like this:
-with mixed_layer(size=256) as m:
+with mixed(size=256) as m:
    m += full_matrix_projection(input=layer1)
    m += identity_projection(input=layer2)
-- You can also set all inputs when invoke mixed_layer as follows:
+- You can also set all inputs when invoke mixed as follows:
-m = mixed_layer(size=256,
+m = mixed(size=256,
          input=[full_matrix_projection(input=layer1),
                 full_matrix_projection(input=layer2)])
@@ -1453,11 +1428,11 @@ Each inputs is a projection or operator.
- size (int) – layer size.
- input – input layers. It is an optional parameter. If set, then this function will just return the layer’s name.
-- act (BaseActivation) – Activation Type.
-- bias_attr (ParameterAttribute or None or bool) – The Bias Attribute. If no bias, then pass False or
-something not type of ParameterAttribute. None will get a
+- act (paddle.v2.Activation.Base) – Activation Type.
+- bias_attr (paddle.v2.attr.ParameterAttribute or None or bool) – The Bias Attribute. If no bias, then pass False or
+something not type of paddle.v2.attr.ParameterAttribute. None will get a
default Bias.
-- layer_attr (ExtraLayerAttribute) – The extra layer config. Default is None.
+- layer_attr (paddle.v2.attr.ExtraAttribute) – The extra layer config. Default is None.
@@ -1476,7 +1451,7 @@ default Bias.
embedding
-class paddle.v2.layer.embedding(*args, **kwargs)
+class paddle.v2.layer.embedding
Define an embedding Layer.
@@ -1507,7 +1482,7 @@ for details.
scaling_projection
-class paddle.v2.layer.scaling_projection(**kwargs)
+class paddle.v2.layer.scaling_projection
scaling_projection multiplies the input with a scalar parameter and adds it
to the output.
@@ -1541,7 +1516,7 @@ the output.
dotmul_projection
-class paddle.v2.layer.dotmul_projection(**kwargs)
+class paddle.v2.layer.dotmul_projection
DotMulProjection with a layer as input.
It performs element-wise multiplication with weight.
@@ -1576,7 +1551,7 @@ It performs element-wise multiplication with weight.
dotmul_operator
-class paddle.v2.layer.dotmul_operator(**kwargs)
+class paddle.v2.layer.dotmul_operator
DotMulOperator takes two inputs and performs element-wise multiplication:
\[out.row[i] += scale * (a.row[i] .* b.row[i])\]
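A hedged usage sketch in the docstring’s own style (layer1 and layer2 are assumed layers of equal size; scale is optional and defaults to one):
# Element-wise product of two layers, scaled by 0.5.
op = dotmul_operator(a=layer1, b=layer2, scale=0.5)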
@@ -1612,7 +1587,7 @@ scale is a config scalar, its default value is one.
full_matrix_projection
-class paddle.v2.layer.full_matrix_projection(**kwargs)
+class paddle.v2.layer.full_matrix_projection
Full Matrix Projection. It performs full matrix multiplication.
\[out.row[i] += in.row[i] * weight\]
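Projections such as this one are used inside mixed; a minimal sketch, assuming layer is a previously defined layer:
with mixed(size=100) as m:
    # Add a learned full-matrix projection of `layer` into the mix.
    m += full_matrix_projection(input=layer)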
@@ -1658,7 +1633,7 @@ scale is a config scalar, its default value is one.
identity_projection
-class paddle.v2.layer.identity_projection(**kwargs)
+class paddle.v2.layer.identity_projection
- IdentityProjection if offset=None. It performs:
@@ -1704,7 +1679,7 @@ It selects dimensions [offset, offset+layer_size) from input:
table_projection
-class paddle.v2.layer.table_projection(**kwargs)
+class paddle.v2.layer.table_projection
Table Projection. It selects rows from parameter where row_id
is in input_ids.
@@ -1753,7 +1728,7 @@ and
\(i\) is row_id.
trans_full_matrix_projection
-class paddle.v2.layer.trans_full_matrix_projection(**kwargs)
+class paddle.v2.layer.trans_full_matrix_projection
Different from full_matrix_projection, this projection performs matrix
multiplication, using the transpose of the weight.
@@ -1821,7 +1796,7 @@ sequence of a nested sequence,
pooling
-class paddle.v2.layer.pooling(*args, **kwargs)
+class paddle.v2.layer.pooling
Pooling layer for sequence inputs; it is not used for images.
The example usage is:
seq_pool = pooling(input=layer,
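The example is truncated by the hunk; a hedged completion (AvgPooling and AggregateLevel are assumed helper names here, and the aggregation level is illustrative):
seq_pool = pooling(input=layer,
                   pooling_type=AvgPooling(),  # assumed pooling type helper
                   agg_level=AggregateLevel.EACH_SEQUENCE)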
@@ -1860,7 +1835,7 @@ SumPooling, SquareRootNPooling.
last_seq
-class paddle.v2.layer.last_seq(*args, **kwargs)
+class paddle.v2.layer.last_seq
Get the last timestamp activation of a sequence.
If stride > 0, this layer slides a window whose size is determined by stride,
and returns the last value of the window as the output. Thus, a long sequence
@@ -1898,7 +1873,7 @@ of stride is -1.
first_seq
-class paddle.v2.layer.first_seq(*args, **kwargs)
+class paddle.v2.layer.first_seq
Get the first timestamp activation of a sequence.
If stride > 0, this layer slides a window whose size is determined by stride,
and returns the first value of the window as the output. Thus, a long sequence
@@ -1936,7 +1911,7 @@ of stride is -1.
concat
-class paddle.v2.layer.concat(*args, **kwargs)
+class paddle.v2.layer.concat
Concatenate all input vectors into one huge vector.
Inputs can be a list of paddle.v2.config_base.Layer or a list of projection.
The example usage is:
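The example itself falls outside the hunk; a minimal sketch, assuming layer1 and layer2 are previously defined layers:
# One vector: layer1's output followed by layer2's output.
concat = concat(input=[layer1, layer2])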
@@ -1970,7 +1945,7 @@ Inputs can be list of paddle.v2.config_base.Layer or list of projection.
seq_concat
-class paddle.v2.layer.seq_concat(*args, **kwargs)
+class paddle.v2.layer.seq_concat
Concat sequence a with sequence b.
- Inputs:
@@ -2020,7 +1995,7 @@ default Bias.
block_expand
-class paddle.v2.layer.block_expand(*args, **kwargs)
+class paddle.v2.layer.block_expand
- Expand feature map to minibatch matrix.
@@ -2096,7 +2071,7 @@ sequence of a nested sequence,
-class paddle.v2.layer.expand(*args, **kwargs)
+class paddle.v2.layer.expand
A layer for expanding dense data, or sequence data where the length of each
sequence is one, to sequence data.
The example usage is:
@@ -2135,7 +2110,7 @@ bias.
repeat
-class paddle.v2.layer.repeat(*args, **kwargs)
+class paddle.v2.layer.repeat
A layer for repeating the input num_repeats times. This is equivalent
to applying concat() with num_repeats of the same input.
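A minimal sketch (the repetition count is illustrative):
# Equivalent to concatenating four copies of `layer`.
expand = repeat(input=layer, num_repeats=4)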
@@ -2171,7 +2146,7 @@ to apply concat() with num_repeats same input.
rotate
-class paddle.v2.layer.rotate(*args, **kwargs)
+class paddle.v2.layer.rotate
A layer that rotates each feature channel by 90 degrees (clockwise),
usually used when the input sample is an image or feature map.
@@ -2210,7 +2185,7 @@ usually used when the input sample is some image or feature map.
seq_reshape
-class paddle.v2.layer.seq_reshape(*args, **kwargs)
+class paddle.v2.layer.seq_reshape
A layer for reshaping the sequence. Assume the input sequence has T instances,
the dimension of each instance is M, and the input reshape_size is N, then the
output sequence has T*M/N instances, the dimension of each instance is N.
@@ -2253,7 +2228,7 @@ default Bias.
addto
-class paddle.v2.layer.addto(*args, **kwargs)
+class paddle.v2.layer.addto
AddtoLayer.
\[y = f(\sum_{i} x_i + b)\]
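A minimal sketch summing two branches (layer names and the activation are illustrative assumptions):
# y = f(layer1 + layer2); bias disabled.
addto = addto(input=[layer1, layer2],
              act=paddle.v2.Activation.Relu(),
              bias_attr=False)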
@@ -2305,7 +2280,7 @@ bias.
linear_comb
-class paddle.v2.layer.linear_comb(*args, **kwargs)
+class paddle.v2.layer.linear_comb
- A layer for the weighted sum of vectors; it takes two inputs.
@@ -2368,7 +2343,7 @@ processed in one batch.
interpolation
-class paddle.v2.layer.interpolation(*args, **kwargs)
+class paddle.v2.layer.interpolation
This layer is for linear interpolation with two inputs,
which is used in NEURAL TURING MACHINE.
@@ -2407,7 +2382,7 @@ which is used in NEURAL TURING MACHINE.
bilinear_interp
-class paddle.v2.layer.bilinear_interp(*args, **kwargs)
+class paddle.v2.layer.bilinear_interp
This layer implements bilinear interpolation on a conv layer’s output.
Please refer to Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation
The simple usage is:
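The example is elided by the hunk; a hedged sketch (the output sizes are illustrative):
# Upsample a conv output to a 64x64 grid via bilinear interpolation.
bilinear = bilinear_interp(input=layer, out_size_x=64, out_size_y=64)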
@@ -2442,7 +2417,7 @@ which is used in NEURAL TURING MACHINE.
power
-class paddle.v2.layer.power(*args, **kwargs)
+class paddle.v2.layer.power
This layer applies a power function to a vector element-wise,
which is used in NEURAL TURING MACHINE.
@@ -2480,7 +2455,7 @@ and
\(y\) is an output vector.
scaling
-class paddle.v2.layer.scaling(*args, **kwargs)
+class paddle.v2.layer.scaling
A layer for multiplying an input vector by a weight scalar.
\[y = w x\]
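Here the weight is itself a layer holding a single scalar per sample; a minimal sketch (names are illustrative):
# scale_weight is an assumed layer of size 1.
scale = scaling(input=layer, weight=scale_weight)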
@@ -2519,7 +2494,7 @@ processed in one batch.
slope_intercept
-class paddle.v2.layer.slope_intercept(*args, **kwargs)
+class paddle.v2.layer.slope_intercept
This layer applies a slope and an intercept to the input element-wise.
There is no activation and no weight.
@@ -2556,7 +2531,7 @@ element-wise. There is no activation and weight.
tensor
-class paddle.v2.layer.tensor(*args, **kwargs)
+class paddle.v2.layer.tensor
This layer performs a tensor operation on two inputs.
For example, for each sample:
@@ -2609,7 +2584,7 @@ default Bias.
cos_sim
-class paddle.v2.layer.cos_sim(*args, **kwargs)
+class paddle.v2.layer.cos_sim
Cosine Similarity Layer. The cosine similarity equation is as follows.
\[similarity = cos(\theta) = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\| \|\mathbf{b}\|}\]
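A minimal sketch (layer1 and layer2 are assumed layers of equal size):
# Cosine similarity between two input layers.
cos = cos_sim(a=layer1, b=layer2)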
@@ -2652,7 +2627,7 @@ processed in one batch.
trans
-class paddle.v2.layer.trans(*args, **kwargs)
+class paddle.v2.layer.trans
A layer for transposing a minibatch matrix.
\[y = x^\mathrm{T}\]
@@ -2690,7 +2665,7 @@ processed in one batch.
maxid
-class paddle.v2.layer.max_id(*args, **kwargs)
+class paddle.v2.layer.max_id
A layer for finding the id which has the maximal value for each sample.
The result is stored in output.ids.
The example usage is:
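The example is cut off by the next hunk; a minimal sketch, assuming layer is a previously defined classifier output:
# output.ids of maxid holds the argmax id of each sample.
maxid = max_id(input=layer)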
@@ -2723,7 +2698,7 @@ The result is stored in output.ids.
sampling_id
-class paddle.v2.layer.sampling_id(*args, **kwargs)
+class paddle.v2.layer.sampling_id
A layer for sampling an id from the multinomial distribution given by the
input layer. It samples one id per sample.
The simple usage is:
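The example is elided by the hunk; a minimal sketch, assuming layer outputs a probability distribution per sample:
# Draw one id per sample from the multinomial given by `layer`.
samples = sampling_id(input=layer)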
@@ -2759,7 +2734,7 @@ Sampling one id for one sample.
pad
-class paddle.v2.layer.pad(*args, **kwargs)
+class paddle.v2.layer.pad
This operation pads zeros to the input data according to pad_c, pad_h and
pad_w, which specify the padding size in the channel, height and width
dimensions. The input data shape is NCHW.
@@ -2828,7 +2803,7 @@ in width dimension.
cross_entropy_cost
-class paddle.v2.layer.cross_entropy_cost(*args, **kwargs)
+class paddle.v2.layer.cross_entropy_cost
A loss layer for multi-class cross entropy.
cost = cross_entropy(input=input,
                     label=label)
@@ -2866,7 +2841,7 @@ will not be calculated for weight.
cross_entropy_with_selfnorm_cost
-class paddle.v2.layer.cross_entropy_with_selfnorm_cost(*args, **kwargs)
+class paddle.v2.layer.cross_entropy_with_selfnorm_cost
A loss layer for multi-class cross entropy with self-normalization.
Input should be a vector of positive numbers, without normalization.