Commit 5126f122 authored by Travis CI

Deploy to GitHub Pages: d0599511

Parent a942011c
@@ -579,6 +579,65 @@
"comment" : "Only used in cudnn kernel. workspace size for cudnn, in MB, workspace is a section of GPU memory which will be allocated/freed each time the operator runs, larger workspace size can increase performance but also requires better hardware. This size should be chosen carefully.",
"generated" : 0
} ]
},{
"type" : "depthwise_conv2d",
"comment" : "\nConvolution Operator.\n\nThe convolution operation calculates the output based on the input, filter\nand strides, paddings, dilations, groups parameters. The size of each dimension of the\nparameters is checked in the infer-shape.\nInput(Input) and Output(Output) are in NCHW format. Where N is batch\nsize, C is the number of channels, H is the height of the feature, and W is\nthe width of the feature.\nFilters(Input) is MCHW format. Where M is the number of output image channels, C is\nthe number of input image channels, H is the height of the filter, and W\nis the width of the filter.\nParameters(strides, paddings, dilations) are two elements. These two elements represent\nheight and width, respectively.\nThe input(X) size and output(Out) size may be different.\n\nExample:\n Input:\n Input shape: $(N, C_{in}, H_{in}, W_{in})$\n Filter shape: $(C_{out}, C_{in}, H_f, W_f)$\n Output:\n Output shape: $(N, C_{out}, H_{out}, W_{out})$\n Where\n$$\n H_{out}= \\frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]}+ 1 \\\\\n W_{out}= \\frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]}+ 1\n$$\n",
"inputs" : [
{
"name" : "Input",
"comment" : "(Tensor) The input tensor of convolution operator. The format of input tensor is NCHW, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature.",
"duplicable" : 0,
"intermediate" : 0
}, {
"name" : "Filter",
"comment" : "(Tensor) The filter tensor of convolution operator. The format of the filter tensor is MCHW, where M is the number of output image channels, C is the number of input image channels, H is the height of the filter, and W is the width of the filter. If the groups attribute is greater than 1, C equals the number of input image channels divided by the groups.",
"duplicable" : 0,
"intermediate" : 0
} ],
"outputs" : [
{
"name" : "Output",
"comment" : "(Tensor) The output tensor of convolution operator. The format of output tensor is also NCHW.",
"duplicable" : 0,
"intermediate" : 0
} ],
"attrs" : [
{
"name" : "strides",
"type" : "int array",
"comment" : "(vector<int> default:{1, 1}), the strides(h_stride, w_stride) of convolution operator.",
"generated" : 0
}, {
"name" : "paddings",
"type" : "int array",
"comment" : "(vector<int> default:{0, 0}), the paddings(h_pad, w_pad) of convolution operator.",
"generated" : 0
}, {
"name" : "groups",
"type" : "int",
"comment" : "(int default:1), the groups number of the convolution operator. According to grouped convolution in Alex Krizhevsky's Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels.",
"generated" : 0
}, {
"name" : "dilations",
"type" : "int array",
"comment" : "(vector<int> default:{1, 1}), the dilations(h_dilation, w_dilation) of convolution operator.",
"generated" : 0
}, {
"name" : "use_cudnn",
"type" : "bool",
"comment" : "(bool, default false) Only used in cudnn kernel, need install cudnn",
"generated" : 0
}, {
"name" : "data_format",
"type" : "string",
"comment" : "(string, default NCHW) Only used in An optional string from: \"NHWC\", \"NCHW\". Defaults to \"NHWC\". Specify the data format of the output data, the input will be transformed automatically. ",
"generated" : 0
}, {
"name" : "workspace_size_MB",
"type" : "int",
"comment" : "Only used in cudnn kernel. Need set use_cudnn to true.workspace size for cudnn, in MB, workspace is a section of GPU memory which will be allocated/freed each time the operator runs, larger workspace size can increase performance but also requires better hardware. This size should be chosen carefully.",
"generated" : 0
} ]
},{
"type" : "conv2d",
"comment" : "\nConvolution Operator.\n\nThe convolution operation calculates the output based on the input, filter\nand strides, paddings, dilations, groups parameters. The size of each dimension of the\nparameters is checked in the infer-shape.\nInput(Input) and Output(Output) are in NCHW format. Where N is batch\nsize, C is the number of channels, H is the height of the feature, and W is\nthe width of the feature.\nFilters(Input) is MCHW format. Where M is the number of output image channels, C is\nthe number of input image channels, H is the height of the filter, and W\nis the width of the filter.\nParameters(strides, paddings, dilations) are two elements. These two elements represent\nheight and width, respectively.\nThe input(X) size and output(Out) size may be different.\n\nExample:\n Input:\n Input shape: $(N, C_{in}, H_{in}, W_{in})$\n Filter shape: $(C_{out}, C_{in}, H_f, W_f)$\n Output:\n Output shape: $(N, C_{out}, H_{out}, W_{out})$\n Where\n$$\n H_{out}= \\frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]}+ 1 \\\\\n W_{out}= \\frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]}+ 1\n$$\n",
......
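The output-size formula shared by the two operator comments above can be sanity-checked with a short calculation. The Python snippet below is a minimal illustrative sketch, not code from this repository: conv_output_size is a hypothetical helper that applies the H_out/W_out formula (with the implicit floor division), and the last lines show how depthwise_conv2d relates to the generic groups attribute, assuming groups equal to the number of input channels and a channel multiplier of 1.

# Minimal sketch, for illustration only (not part of this repository).
# conv_output_size applies the H_out/W_out formula from the comments above,
# using floor division to obtain an integer output size.

def conv_output_size(in_size, filter_size, stride, padding, dilation):
    return (in_size + 2 * padding - (dilation * (filter_size - 1) + 1)) // stride + 1

# Example: a 3x3 filter on a 32x32 input with stride 1, padding 1, dilation 1
# preserves the spatial size: (32 + 2*1 - 3) // 1 + 1 == 32.
assert conv_output_size(32, 3, stride=1, padding=1, dilation=1) == 32

# depthwise_conv2d corresponds to the grouped case with groups == C_in: each
# input channel is convolved with its own filter(s). For C_in = 16 and a
# channel multiplier of 1, the MCHW filter shape is
# (M, C/groups, H_f, W_f) = (16, 1, 3, 3).
c_in, multiplier, groups = 16, 1, 16
depthwise_filter_shape = (c_in * multiplier, c_in // groups, 3, 3)
assert depthwise_filter_shape == (16, 1, 3, 3)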