Commit 8ae1db2e authored by Travis CI

Deploy to GitHub Pages: 59bf85d9

Parent 11f24b09
@@ -1703,6 +1703,110 @@
"comment" : "(bool, default false) If true, output a scalar reduced along all dimensions.",
"generated" : 0
} ]
},{
"type" : "round",
"comment" : "\nRound Activation Operator.\n\n$out = [x]$\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "Input of Round operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "Output of Round operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "norm",
"comment" : "\n \"Input shape: $(N, C, H, W)$\n Scale shape: $(C, 1)$\n Output shape: $(N, C, H, W)$\n Where\n forward\n $$\n [\\frac {x_{1}}{\\sqrt{\\sum{x_{i}^{2}}}} \\frac {x_{2}}{\\sqrt{\\sum{x_{i}^{2}}}} \\frac {x_{3}}{\\sqrt{\\sum{x_{i}^{2}}}} \\cdot \\cdot \\cdot \\frac {x_{n}}{\\sqrt{\\sum{x_{i}^{2}}}}]\n $$\n backward\n $$\n \\frac{\\frac{\\mathrm{d}L }{\\mathrm{d}y_{1}} - \\frac {x_{1}\\sum {\\frac{\\mathrm{d} L}{\\mathrm{d} y_{j}}}x_{j}}{\\sum x_{j}^{2}} }{\\sqrt{\\sum{x_{j}^{2}}}}\n $$\n ",
"inputs" : [
{
"name" : "X",
"comment" : "(Tensor) The input tensor of norm operator. The format of input tensor is NCHW. Where N is batch size, C is the number of channels, H and W is the height and width of feature.",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Scale",
"comment" : "(Tensor) The input tensor of norm operator. The format of input tensor is C * 1.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "(Tensor) The output tensor of norm operator.N * M.M = C * H * W",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [
{
"name" : "epsilon",
"type" : "float",
"comment" : "(float, default 1e-10) Constant for numerical stability.",
"generated" : 0
} ]
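A NumPy sketch of the forward formula above; the reduction axis (channels) and the broadcasting of Scale are assumptions read off the declared shapes, not taken from the kernel source:

```python
import numpy as np

def norm_forward(x, scale, epsilon=1e-10):
    # L2-normalize across the channel axis at every (n, h, w) position,
    # then apply the per-channel scale (shape (C, 1) -> (1, C, 1, 1)).
    l2 = np.sqrt(np.sum(x * x, axis=1, keepdims=True) + epsilon)
    return x / l2 * scale.reshape(1, -1, 1, 1)

x = np.random.rand(2, 3, 4, 5).astype("float32")
scale = np.ones((3, 1), dtype="float32")
print(norm_forward(x, scale).shape)  # (2, 3, 4, 5)
```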
},{
"type" : "modified_huber_loss",
"comment" : "\nModified Huber Loss Operator.\n\nThis operator is used in binary classification problem. The shape of\ninput X and target Y are both [N, 1] and so is the shape of the output loss.\nSince target Y is not differentiable, calculating gradient for Y is illegal.\nThe formula of modified huber loss is:\n\n$$\nL(y, f(x)) = \n\\begin{cases}\n(\\max(0, 1 - yf(x)))^2, \\text{if} \\ yf(x) >= -1 \\\\\n -4yf(x), \\quad \\text{otherwise}\n\\end{cases}\n$$\n\nMake sure the values of target label Y are in {0, 1} here. This operator will\nscale values of Y to {-1, +1} when computing losses and gradients.\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "The input tensor of modified huber loss op. X is 2-D tensor with shape [batch_size, 1].",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Y",
"comment" : "The target labels of modified huber loss op. The shape of Y is the same as X. Values of Y must be 0 or 1.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "IntermediateVal",
"comment" : "Variable to save intermediate result which will be reused in backward processing.",
"duplicable" : 0,
"intermediate" : 1
}, {
"name" : "Out",
"comment" : "Classification loss for X.",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "elementwise_sub",
"comment" : "\nLimited Elementwise Sub Operator.\n\nThe equation is:\n\n$Out = X - Y$\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nexample:\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nBoth the input X and Y can carry the LoD (Level of Details) information,\nor not. But the output only shares the LoD information with input X.\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "The output of elementwise op",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [
{
"name" : "axis",
"type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X",
"generated" : 0
} ]
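The broadcasting rule above can be made concrete with a small NumPy sketch; the reshape-based alignment is an assumption that reproduces the documented examples, not the kernel code:

```python
import numpy as np

def elementwise_sub(x, y, axis=-1):
    # axis == -1 means Y aligns with X's trailing dimensions.
    if axis == -1:
        axis = x.ndim - y.ndim
    # Pad Y's shape with 1s on both sides so NumPy broadcasting applies.
    shape = [1] * axis + list(y.shape) + [1] * (x.ndim - axis - y.ndim)
    return x - y.reshape(shape)

x = np.ones((2, 3, 4, 5))
y = np.arange(12, dtype=float).reshape(3, 4)
print(elementwise_sub(x, y, axis=1).shape)  # (2, 3, 4, 5)
```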
},{
"type" : "rnn_memory_helper",
"comment" : "",
@@ -2229,35 +2333,6 @@
"comment" : "(float, default 1.0) The step size by which the input tensor will be incremented.",
"generated" : 0
} ]
},{
"type" : "log_loss",
"comment" : "\nLogLoss Operator.\n\nLog loss is a loss function used for binary classification. Log Loss quantifies\nthe accuracy of a classifier by penalising false classifications. Minimising the\nLog Loss is equivalent to maximising the accuracy of the classifier. We define\nPredicted as the values predicted by our model and Labels as the target ground\ntruth value. Log loss can evaluate how close the predicted values are to the\ntarget. The shapes of Predicted and Labels are both [batch_size, 1].\nThe equation is:\n\n$$\nLoss = - Labels * log(Predicted + \\epsilon) -\n (1 - Labels) * log(1 - Predicted + \\epsilon)\n$$\n\n",
"inputs" : [
{
"name" : "Predicted",
"comment" : "The input value (Predicted) of Log loss op.Predicted is a 2-D tensor with shape [batch_size, 1].",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Labels",
"comment" : "The target value (Labels) of Log loss op.Labels is a 2-D tensor with shape [batch_size, 1].",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Loss",
"comment" : "The output tensor with shape [batch_size, 1] which represents the log loss.",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [
{
"name" : "epsilon",
"type" : "float",
"comment" : "Epsilon in log loss.",
"generated" : 0
} ]
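The equation above transcribes directly into NumPy; this sketch is illustrative, and the epsilon value is a placeholder since the attribute carries no documented default:

```python
import numpy as np

def log_loss(predicted, labels, epsilon=1e-4):
    # epsilon guards against log(0); 1e-4 is a placeholder, not a default.
    return (-labels * np.log(predicted + epsilon)
            - (1.0 - labels) * np.log(1.0 - predicted + epsilon))

pred = np.array([[0.9], [0.1], [0.6]])   # shape [batch_size, 1]
lab = np.array([[1.0], [0.0], [1.0]])
print(log_loss(pred, lab))
```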
},{
"type" : "unpool",
"comment" : "\nInput shape is: $(N, C_{in}, H_{in}, W_{in})$, Output shape is:\n$(N, C_{out}, H_{out}, W_{out})$, where\n$$\nH_{out} = (H_{in}−1) * strides[0] − 2 * paddings[0] + ksize[0] \\\\\nW_{out} = (W_{in}−1) * strides[1] − 2 * paddings[1] + ksize[1]\n$$\nPaper: http://www.matthewzeiler.com/wp-content/uploads/2017/07/iccv2011.pdf\n",
@@ -2807,6 +2882,30 @@
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "sequence_erase",
"comment" : "\nSequence Erase Operator.\n\nSequence erase operator erases tokens specified by Attr(tokens) from the input \nsequences Input(X), and outputs the remaining data and modifies the LoD \ninformation at the same time. For example, given a 2-D LoDTensor\n\n X = [[2, 2, 6, 1, 3, 9, 6, 1, 0, 1]]^T\n\nwith lod = [[0, 3, 6, 10]], there are three sequences in the input:\n \n X1 = [[2, 2, 6]]^T, X2 = [[1, 3, 9]]^T and X3 = [[6, 1, 0, 1]]^T.\n\nIf the tokens to be erased are Attr(tokens) = [2, 3, 5], after the erasing \noperation, the three sequences become\n\n X1' = [[6]]^T, X2' = [[1, 9]]^T and X3' = [[6, 1, 0, 1]]^T.\n\nHence the LoDTensor Output(Out) should be\n\n Out = [[6, 1, 9, 6, 1, 0, 1]]^T,\n\nwith lod = [[0, 1, 3, 7]].\n\nAn example usage for this operator is to remove the special tokens when \ncomputing the edit distance between two strings, such as blank, start token, \nand end token.\n",
"inputs" : [
{
"name" : "X",
"comment" : "(2-D LoDTensor with the 2nd dim. equal to 1) Input LoDTensor of SequenceEraseOp.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "(2-D LoDTensor with the 2nd dim. equal to 1) Output LoDTensor of SequenceEraseOp.",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [
{
"name" : "tokens",
"type" : "int array",
"comment" : "(vector<int>) Tokens need to be erased from input sequences.",
"generated" : 0
} ]
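The worked example in the comment can be reproduced with a short NumPy sketch; the offset-style LoD handling here is an assumption matching the `[[0, 3, 6, 10]]` notation above:

```python
import numpy as np

def sequence_erase(x, lod, tokens):
    # Drop every value in `tokens` from each [start, end) sequence,
    # then rebuild the offset-style LoD for the remaining data.
    out, new_lod = [], [0]
    for start, end in zip(lod[:-1], lod[1:]):
        kept = [v for v in x[start:end] if v not in tokens]
        out.extend(kept)
        new_lod.append(new_lod[-1] + len(kept))
    return np.array(out), new_lod

x = np.array([2, 2, 6, 1, 3, 9, 6, 1, 0, 1])
out, lod = sequence_erase(x, [0, 3, 6, 10], {2, 3, 5})
print(out.tolist(), lod)  # [6, 1, 9, 6, 1, 0, 1] [0, 1, 3, 7]
```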
},{
"type" : "scale",
"comment" : "\nScale operator\n\n$$Out = scale*X$$\n",
@@ -3132,24 +3231,24 @@
"generated" : 0
} ]
},{
"type" : "norm",
"comment" : "\n \"Input shape: $(N, C, H, W)$\n Scale shape: $(C, 1)$\n Output shape: $(N, C, H, W)$\n Where\n forward\n $$\n [\\frac {x_{1}}{\\sqrt{\\sum{x_{i}^{2}}}} \\frac {x_{2}}{\\sqrt{\\sum{x_{i}^{2}}}} \\frac {x_{3}}{\\sqrt{\\sum{x_{i}^{2}}}} \\cdot \\cdot \\cdot \\frac {x_{n}}{\\sqrt{\\sum{x_{i}^{2}}}}]\n $$\n backward\n $$\n \\frac{\\frac{\\mathrm{d}L }{\\mathrm{d}y_{1}} - \\frac {x_{1}\\sum {\\frac{\\mathrm{d} L}{\\mathrm{d} y_{j}}}x_{j}}{\\sum x_{j}^{2}} }{\\sqrt{\\sum{x_{j}^{2}}}}\n $$\n ",
"type" : "log_loss",
"comment" : "\nLogLoss Operator.\n\nLog loss is a loss function used for binary classification. Log Loss quantifies\nthe accuracy of a classifier by penalising false classifications. Minimising the\nLog Loss is equivalent to maximising the accuracy of the classifier. We define\nPredicted as the values predicted by our model and Labels as the target ground\ntruth value. Log loss can evaluate how close the predicted values are to the\ntarget. The shapes of Predicted and Labels are both [batch_size, 1].\nThe equation is:\n\n$$\nLoss = - Labels * log(Predicted + \\epsilon) -\n (1 - Labels) * log(1 - Predicted + \\epsilon)\n$$\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "(Tensor) The input tensor of norm operator. The format of input tensor is NCHW. Where N is batch size, C is the number of channels, H and W is the height and width of feature.",
"name" : "Predicted",
"comment" : "The input value (Predicted) of Log loss op.Predicted is a 2-D tensor with shape [batch_size, 1].",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Scale",
"comment" : "(Tensor) The input tensor of norm operator. The format of input tensor is C * 1.",
"name" : "Labels",
"comment" : "The target value (Labels) of Log loss op.Labels is a 2-D tensor with shape [batch_size, 1].",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "(Tensor) The output tensor of norm operator.N * M.M = C * H * W",
"name" : "Loss",
"comment" : "The output tensor with shape [batch_size, 1] which represents the log loss.",
"duplicable" : 0,
"intermediate" : 0
} ],
@@ -3157,64 +3256,7 @@
{
"name" : "epsilon",
"type" : "float",
"comment" : "(float, default 1e-10) Constant for numerical stability.",
"generated" : 0
} ]
},{
"type" : "modified_huber_loss",
"comment" : "\nModified Huber Loss Operator.\n\nThis operator is used in binary classification problem. The shape of\ninput X and target Y are both [N, 1] and so is the shape of the output loss.\nSince target Y is not differentiable, calculating gradient for Y is illegal.\nThe formula of modified huber loss is:\n\n$$\nL(y, f(x)) = \n\\begin{cases}\n(\\max(0, 1 - yf(x)))^2, \\text{if} \\ yf(x) >= -1 \\\\\n -4yf(x), \\quad \\text{otherwise}\n\\end{cases}\n$$\n\nMake sure the values of target label Y are in {0, 1} here. This operator will\nscale values of Y to {-1, +1} when computing losses and gradients.\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "The input tensor of modified huber loss op. X is 2-D tensor with shape [batch_size, 1].",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Y",
"comment" : "The target labels of modified huber loss op. The shape of Y is the same as X. Values of Y must be 0 or 1.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "IntermediateVal",
"comment" : "Variable to save intermediate result which will be reused in backward processing.",
"duplicable" : 0,
"intermediate" : 1
}, {
"name" : "Out",
"comment" : "Classification loss for X.",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
},{
"type" : "elementwise_sub",
"comment" : "\nLimited Elementwise Sub Operator.\n\nThe equation is:\n\n$Out = X - Y$\n\nX is a tensor of any dimension and the dimensions of tensor Y must be smaller than\nor equal to the dimensions of X. \n\nThere are two cases for this operator:\n1. The shape of Y is same with X;\n2. The shape of Y is a subset of X.\n\nFor case 2:\nY will be broadcasted to match the shape of X and axis should be \nthe starting dimension index for broadcasting Y onto X.\n\nexample:\n shape(X) = (2, 3, 4, 5), shape(Y) = (,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (5,)\n shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)\n shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1\n shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0\n\nBoth the input X and Y can carry the LoD (Level of Details) information,\nor not. But the output only shares the LoD information with input X.\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "(Tensor) The first input tensor of elementwise op",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Y",
"comment" : "(Tensor) The second input tensor of elementwise op",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "The output of elementwise op",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [
{
"name" : "axis",
"type" : "int",
"comment" : "(int, default -1) The starting dimension index for broadcasting Y onto X",
"comment" : "Epsilon in log loss.",
"generated" : 0
} ]
},{
@@ -5604,22 +5646,4 @@
"comment" : "(float, default -0.5f) Learning Rate Power.",
"generated" : 0
} ]
},{
"type" : "round",
"comment" : "\nRound Activation Operator.\n\n$out = [x]$\n\n",
"inputs" : [
{
"name" : "X",
"comment" : "Input of Round operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Out",
"comment" : "Output of Round operator",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [ ]
}]