Commit a6c8ff43 authored by 刘琦

Merge branch 'docs' into 'master'

Update docs

See merge request !468
......@@ -10,9 +10,6 @@ project = u'MiAI Compute Engine'
author = u'%s Developers' % project
copyright = u'2018, %s' % author
version = u'0.6'
release = u'0.6.0 alpha'
source_parsers = {
'.md': recommonmark.parser.CommonMarkParser,
}
......
......@@ -3,7 +3,7 @@ Adding a new Op
You can create a custom op if it is not supported yet.
To add a custom op, you need to finish the following steps.
To add a custom op, you need to finish the following steps:
Define the Op class
--------------------
......@@ -95,3 +95,7 @@ Add test and benchmark
----------------------
It's strongly recommended to add unit tests and micro benchmarks for your
new Op. If you wish to contribute back, they are required.
Document the new Op
---------------------
Finally, add an entry to the operator table in the documentation.
......@@ -34,7 +34,8 @@ clang-format -style="{BasedOnStyle: google, \
C++ logging guideline
---------------------
The rule of VLOG level:
VLOG is used for verbose logging, which is configured by the environment variable
`MACE_CPP_MIN_VLOG_LEVEL`. The guideline for VLOG levels is as follows:
```
0. Ad hoc debug logging, should only be added in test or temporary ad hoc
......
......@@ -5,17 +5,21 @@ CPU runtime memory layout
-------------------------
The CPU tensor buffer is organized in the following order:
+-----------------------------+--------------+
| Tensor type | Buffer |
+=============================+==============+
| Intermediate input/output | NCHW |
+-----------------------------+--------------+
| Convolution Filter | OIHW |
+-----------------------------+--------------+
| Depthwise Convolution Filter| MIHW |
+-----------------------------+--------------+
| 1-D Argument, length = W | W |
+-----------------------------+--------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Tensor type
- Buffer
* - Intermediate input/output
- NCHW
* - Convolution Filter
- OIHW
* - Depthwise Convolution Filter
- MIHW
* - 1-D Argument, length = W
- W
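For instance, indexing into these packed buffers can be sketched as follows (an illustrative helper, not MACE code; the function name is ours):

```python
def nchw_offset(n, c, h, w, C, H, W):
    """Linear offset of element (n, c, h, w) in a contiguous NCHW buffer.

    Illustrative only. OIHW/MIHW filters index the same way, with
    (o, i, h, w) or (m, i, h, w) in place of (n, c, h, w).
    """
    return ((n * C + c) * H + h) * W + w

# Element (n=1, c=2, h=3, w=4) of a C=3, H=4, W=5 buffer sits at offset 119.
```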
OpenCL runtime memory layout
-----------------------------
......@@ -34,66 +38,117 @@ Input/Output Tensor
The Input/Output Tensor is stored in NHWC format:
+---------------------------+--------+----------------------------+-----------------------------+
|Tensor type | Buffer | Image size [width, height] | Explanation |
+===========================+========+============================+=============================+
|Channel-Major Input/Output | NHWC | [W * (C+3)/4, N * H] | Default Input/Output format |
+---------------------------+--------+----------------------------+-----------------------------+
|Height-Major Input/Output | NHWC | [W * C, N * (H+3)/4] | Winograd Convolution format |
+---------------------------+--------+----------------------------+-----------------------------+
|Width-Major Input/Output | NHWC | [(W+3)/4 * C, N * H] | Winograd Convolution format |
+---------------------------+--------+----------------------------+-----------------------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Tensor type
- Buffer
- Image size [width, height]
- Explanation
* - Channel-Major Input/Output
- NHWC
- [W * (C+3)/4, N * H]
- Default Input/Output format
* - Height-Major Input/Output
- NHWC
- [W * C, N * (H+3)/4]
- Winograd Convolution format
* - Width-Major Input/Output
- NHWC
- [(W+3)/4 * C, N * H]
- Winograd Convolution format
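The image sizes above can be computed as in this sketch (function names are ours, not MACE's API; `//` is integer division, so `(c + 3) // 4` rounds the channel count up to groups of 4):

```python
def channel_major_size(n, h, w, c):
    """OpenCL image size [width, height] for channel-major NHWC input/output."""
    return (w * ((c + 3) // 4), n * h)

def height_major_size(n, h, w, c):
    """Image size for height-major (Winograd) input/output."""
    return (w * c, n * ((h + 3) // 4))

def width_major_size(n, h, w, c):
    """Image size for width-major (Winograd) input/output."""
    return (((w + 3) // 4) * c, n * h)
```

For a 1x8x8x3 tensor these give (8, 8), (24, 2), and (6, 8) respectively.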
Each pixel of the **Image** contains 4 elements. The table below lists the
coordinate mapping between **Image** and **Buffer**.
+---------------------------+-------------------------------------------------------------------------+-------------+
|Tensor type | Pixel coordinate relationship | Explanation |
+===========================+=========================================================================+=============+
|Channel-Major Input/Output | P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=i%W, c=[i/W * 4 + k])} | k=[0, 4) |
+---------------------------+-------------------------------------------------------------------------+-------------+
|Height-Major Input/Output | P[i, j] = {E[n, h, w, c] | (n=j%N, h=[j/H*4 + k], w=i%W, c=i/W)} | k=[0, 4) |
+---------------------------+-------------------------------------------------------------------------+-------------+
|Width-Major Input/Output | P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=[i%W*4 + k], c=i/W)} | k=[0, 4) |
+---------------------------+-------------------------------------------------------------------------+-------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Tensor type
- Pixel coordinate relationship
- Explanation
* - Channel-Major Input/Output
- P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=i%W, c=[i/W * 4 + k])}
- k=[0, 4)
* - Height-Major Input/Output
- P[i, j] = {E[n, h, w, c] | (n=j%N, h=[j/H*4 + k], w=i%W, c=i/W)}
- k=[0, 4)
* - Width-Major Input/Output
- P[i, j] = {E[n, h, w, c] | (n=j/H, h=j%H, w=[i%W*4 + k], c=i/W)}
- k=[0, 4)
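The channel-major (default) mapping in the table above can be sketched in Python (illustrative only; the function name is ours). It returns the four buffer elements E[n, h, w, c] packed into pixel P[i, j]:

```python
def channel_major_pixel(i, j, H, W):
    """Buffer elements E[n, h, w, c] packed into image pixel P[i, j]
    for the channel-major (default) input/output layout."""
    n, h, w = j // H, j % H, i % W
    c0 = (i // W) * 4          # four consecutive channels per pixel
    return [(n, h, w, c0 + k) for k in range(4)]
```

For example, with H=2 and W=3, pixel P[4, 3] holds E[1, 1, 1, c] for c = 4..7.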
Filter Tensor
~~~~~~~~~~~~~
+----------------------------+------+---------------------------------+------------------------------------------------------------------------------+
| Tensor |Buffer| Image size [width, height] | Explanation |
+============================+======+=================================+==============================================================================+
|Convolution Filter | HWOI | [RoundUp<4>(I), H * W * (O+3)/4]|Convolution filter format,There is no difference compared to [H*w*I, (O+3)/4]|
+----------------------------+------+---------------------------------+------------------------------------------------------------------------------+
|Depthwise Convlution Filter | HWIM | [H * W * M, (I+3)/4] |Depthwise-Convolution filter format |
+----------------------------+------+---------------------------------+------------------------------------------------------------------------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Tensor
- Buffer
- Image size [width, height]
- Explanation
* - Convolution Filter
- HWOI
- [RoundUp<4>(I), H * W * (O+3)/4]
- Convolution filter format. There is no difference compared to [H*W*I, (O+3)/4]
* - Depthwise Convolution Filter
- HWIM
- [H * W * M, (I+3)/4]
- Depthwise-Convolution filter format
Each pixel of the **Image** contains 4 elements. The table below lists the
coordinate mapping between **Image** and **Buffer**.
+----------------------------+-------------------------------------------------------------------+---------------------------------------+
|Tensor type | Pixel coordinate relationship | Explanation |
+============================+===================================================================+=======================================+
|Convolution Filter | P[m, n] = {E[h, w, o, i] &#124; (h=T/W, w=T%W, o=[n/HW*4+k], i=m)}| HW= H * W, T=n%HW, k=[0, 4) |
+----------------------------+-------------------------------------------------------------------+---------------------------------------+
|Depthwise Convlution Filter | P[m, n] = {E[h, w, i, 0] &#124; (h=m/W, w=m%W, i=[n*4+k])} | only support multiplier == 1, k=[0, 4)|
+----------------------------+-------------------------------------------------------------------+---------------------------------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Tensor type
- Pixel coordinate relationship
- Explanation
* - Convolution Filter
- P[m, n] = {E[h, w, o, i] | (h=T/W, w=T%W, o=[n/HW*4+k], i=m)}
- HW= H * W, T=n%HW, k=[0, 4)
* - Depthwise Convolution Filter
- P[m, n] = {E[h, w, i, 0] | (h=m/W, w=m%W, i=[n*4+k])}
- only support multiplier == 1, k=[0, 4)
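As with the input/output layouts, the filter mappings can be sketched (illustrative helpers with our own names, not MACE code):

```python
def conv_filter_pixel(m, n, H, W):
    """HWOI buffer elements E[h, w, o, i] packed into image pixel P[m, n]."""
    HW = H * W
    T = n % HW
    h, w = T // W, T % W
    o0 = (n // HW) * 4         # four consecutive output channels per pixel
    return [(h, w, o0 + k, m) for k in range(4)]

def depthwise_filter_pixel(m, n, W):
    """HWIM buffer elements E[h, w, i, 0] packed into image pixel P[m, n]
    (multiplier == 1 only)."""
    h, w = m // W, m % W
    return [(h, w, n * 4 + k, 0) for k in range(4)]
```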
1-D Argument Tensor
~~~~~~~~~~~~~~~~~~~
+----------------+----------+------------------------------+---------------------------------+
| Tensor type | Buffer | Image size [width, height] | Explanation |
+================+==========+==============================+=================================+
| 1-D Argument | W | [(W+3)/4, 1] | 1D argument format, e.g. Bias |
+----------------+----------+------------------------------+---------------------------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Tensor type
- Buffer
- Image size [width, height]
- Explanation
* - 1-D Argument
- W
- [(W+3)/4, 1]
- 1D argument format, e.g. Bias
Each pixel of the **Image** contains 4 elements. The table below lists the
coordinate mapping between **Image** and **Buffer**.
+--------------+---------------------------------+-------------+
| Tensor type | Pixel coordinate relationship | Explanation |
+==============+=================================+=============+
|1-D Argument | P[i, 0] = {E[w] &#124; w=i*4+k} | k=[0, 4) |
+--------------+---------------------------------+-------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Tensor type
- Pixel coordinate relationship
- Explanation
* - 1-D Argument
- P[i, 0] = {E[w] | w=i*4+k}
- k=[0, 4)
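The 1-D argument packing is the simplest case; a sketch (function names ours):

```python
def arg_image_width(W):
    """Image width for a length-W 1-D argument: four values per pixel."""
    return (W + 3) // 4

def arg_pixel(i):
    """Buffer elements E[w] packed into image pixel P[i, 0]."""
    return [i * 4 + k for k in range(4)]
```

A bias of length 10 becomes a 3x1 image; pixel P[2, 0] holds E[8..11], where the slots beyond w=9 are padding.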
Frequently Asked Questions
Frequently asked questions
==========================
Why is the generated static library file size so huge?
......
Create a model deployment
=========================
Create a model deployment file
==============================
Each YAML deployment script describes a deployment case (for example,
a smart camera application may contain face recognition, object recognition,
and voice recognition models, which can be defined in one deployment file),
and will generate one static library (if more than one ABI is specified,
there will be one static library for each). Each YAML script can contain one
or more models.
The first step to deploy your models is to create a YAML model deployment
file.
One deployment file describes one deployment case. Each file will generate
one static library (if more than one ABI is specified, there will be one
static library for each ABI). A deployment file can contain one or more
models; for example, a smart camera application may contain face recognition,
object recognition, and voice recognition models, which can all be defined
in one deployment file.
Model deployment file example
-------------------------------
TODO: change to a link to a standalone file with comments.
.. code:: yaml
Example
----------
Here is a deployment file example used by the Android demo application.
# The file name will be used as the name of the generated library: libmace-$(unknown).a
target_abis: [armeabi-v7a, arm64-v8a]
# The SoC of the target device, which can be obtained with
# `adb shell getprop | grep ro.board.platform | cut -d [ -f3 | cut -d ] -f1`
target_socs: [msm8998]
embed_model_data: 1
vlog_level: 0
models: # One deployment file can contain multiple models; all of them will be included in the generated library
  first_net: # The model tag, which will be used to refer to the model when scheduling it
    platform: tensorflow
    model_file_path: path/to/model64.pb # also support http:// and https://
    model_sha256_checksum: 7f7462333406e7dea87222737590ebb7d94490194d2f21a7d72bafa87e64e9f9
    input_nodes: input_node
    output_nodes: output_node
    input_shapes: 1,64,64,3
    output_shapes: 1,64,64,2
    runtime: gpu
    limit_opencl_kernel_time: 0
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
  second_net:
    platform: caffe
    model_file_path: path/to/model.prototxt
    weight_file_path: path/to/weight.caffemodel
    model_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    weight_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    input_nodes:
      - input_node0
      - input_node1
    output_nodes:
      - output_node0
      - output_node1
    input_shapes:
      - 1,256,256,3
      - 1,128,128,3
    output_shapes:
      - 1,256,256,2
      - 1,1,1,2
    runtime: cpu
    limit_opencl_kernel_time: 1
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
TODO: change this example file to the demo deployment file
(reuse the same file) and rename to a reasonable name.
.. literalinclude:: models/demo_app_models.yaml
:language: yaml
Configurations
--------------------
+--------------------------+----------------------------------------------------------------------------------------+
| Configuration key | Description |
+==========================+========================================================================================+
| target_abis | The target ABI to build, can be one or more of 'host', 'armeabi-v7a' or 'arm64-v8a' |
+--------------------------+----------------------------------------------------------------------------------------+
| embed_model_data | Whether embedding model weights as the code, default to 1 |
+--------------------------+----------------------------------------------------------------------------------------+
| platform | The source framework, tensorflow or caffe |
+--------------------------+----------------------------------------------------------------------------------------+
| model_file_path | The path of the model file, can be local or remote |
+--------------------------+----------------------------------------------------------------------------------------+
| weight_file_path | The path of the model weights file, used by Caffe model |
+--------------------------+----------------------------------------------------------------------------------------+
| model_sha256_checksum | The SHA256 checksum of the model file |
+--------------------------+----------------------------------------------------------------------------------------+
| weight_sha256_checksum | The SHA256 checksum of the weight file, used by Caffe model |
+--------------------------+----------------------------------------------------------------------------------------+
| input_nodes | The input node names, one or more strings |
+--------------------------+----------------------------------------------------------------------------------------+
| output_nodes | The output node names, one or more strings |
+--------------------------+----------------------------------------------------------------------------------------+
| input_shapes | The shapes of the input nodes, in NHWC order |
+--------------------------+----------------------------------------------------------------------------------------+
| output_shapes | The shapes of the output nodes, in NHWC order |
+--------------------------+----------------------------------------------------------------------------------------+
| runtime | The running device, one of CPU, GPU or DSP |
+--------------------------+----------------------------------------------------------------------------------------+
| limit_opencl_kernel_time | Whether splitting the OpenCL kernel within 1 ms to keep UI responsiveness, default to 0|
+--------------------------+----------------------------------------------------------------------------------------+
| dsp_mode | Control the DSP precision and performance, default to 0 usually works for most cases |
+--------------------------+----------------------------------------------------------------------------------------+
| obfuscate | Whether to obfuscate the model operator name, default to 0 |
+--------------------------+----------------------------------------------------------------------------------------+
| fast_conv | Whether to enable Winograd convolution, **will increase memory consumption** |
+--------------------------+----------------------------------------------------------------------------------------+
| input_files | Specify Numpy validation inputs. When not provided, [-1, 1] random values will be used |
+--------------------------+----------------------------------------------------------------------------------------+
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Configuration key
- Description
* - target_abis
- The target ABI to build, can be one or more of 'host', 'armeabi-v7a' or 'arm64-v8a'
* - embed_model_data
- Whether to embed the model weights in the generated code; defaults to 1
* - platform
- The source framework, tensorflow or caffe
* - model_file_path
- The path of the model file, can be local or remote
* - weight_file_path
- The path of the model weights file, used by Caffe model
* - model_sha256_checksum
- The SHA256 checksum of the model file
* - weight_sha256_checksum
- The SHA256 checksum of the weight file, used by Caffe model
* - input_nodes
- The input node names, one or more strings
* - output_nodes
- The output node names, one or more strings
* - input_shapes
- The shapes of the input nodes, in NHWC order
* - output_shapes
- The shapes of the output nodes, in NHWC order
* - runtime
- The running device, one of CPU, GPU or DSP
* - limit_opencl_kernel_time
- Whether to split the OpenCL kernel so each piece runs within 1 ms to keep the UI responsive; defaults to 0
* - dsp_mode
- Controls the DSP precision and performance; the default 0 usually works for most cases
* - obfuscate
- Whether to obfuscate the model operator names; defaults to 0
* - fast_conv
- Whether to enable Winograd convolution, **will increase memory consumption**
* - input_files
- Specify Numpy validation inputs. When not provided, [-1, 1] random values will be used
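The `model_sha256_checksum` and `weight_sha256_checksum` values are plain SHA-256 hex digests of the corresponding files; one way to produce them is this Python sketch (the helper name is ours):

```python
import hashlib

def file_sha256(path):
    """Hex SHA-256 digest of a file, for the model_sha256_checksum /
    weight_sha256_checksum fields of the deployment file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large model files are not loaded at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. file_sha256("path/to/model64.pb")
```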
Docker Images
=============
* Log in to [Xiaomi Docker Registry](http://docs.api.xiaomi.net/docker-registry/)
```
docker login cr.d.xiaomi.net
```
* Build with `Dockerfile`
```
docker build -t cr.d.xiaomi.net/mace/mace-dev .
```
* Pull image from docker registry
```
docker pull cr.d.xiaomi.net/mace/mace-dev
```
* Create container
```
# Set 'host' network to use ADB
docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash
```
Docker Images
=============
- Log in to `Xiaomi Docker
  Registry <http://docs.api.xiaomi.net/docker-registry/>`__
``docker login cr.d.xiaomi.net``
- Build with ``Dockerfile``
``docker build -t cr.d.xiaomi.net/mace/mace-dev .``
- Pull image from docker registry
``docker pull cr.d.xiaomi.net/mace/mace-dev``
- Create container
``docker run -it --rm -v /local/path:/container/path --net=host cr.d.xiaomi.net/mace/mace-dev /bin/bash`` (set the 'host' network to use ADB)
# The file name will be used as the name of the generated library: libmace-$(unknown).a
target_abis: [armeabi-v7a, arm64-v8a]
# The SoC of the target device, which can be obtained with
# `adb shell getprop | grep ro.board.platform | cut -d [ -f3 | cut -d ] -f1`
target_socs: [msm8998]
embed_model_data: 1
models: # One deployment file can contain multiple models; all of them will be included in the generated library
  first_net: # The model tag, which will be used to refer to the model when scheduling it
    platform: tensorflow
    model_file_path: path/to/model64.pb # also support http:// and https://
    model_sha256_checksum: 7f7462333406e7dea87222737590ebb7d94490194d2f21a7d72bafa87e64e9f9
    input_nodes: input_node
    output_nodes: output_node
    input_shapes: 1,64,64,3
    output_shapes: 1,64,64,2
    runtime: gpu
    limit_opencl_kernel_time: 0
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
  second_net:
    platform: caffe
    model_file_path: path/to/model.prototxt
    weight_file_path: path/to/weight.caffemodel
    model_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    weight_sha256_checksum: 05d92625809dc9edd6484882335c48c043397aed450a168d75eb8b538e86881a
    input_nodes:
      - input_node0
      - input_node1
    output_nodes:
      - output_node0
      - output_node1
    input_shapes:
      - 1,256,256,3
      - 1,128,128,3
    output_shapes:
      - 1,256,256,2
      - 1,1,1,2
    runtime: cpu
    limit_opencl_kernel_time: 1
    dsp_mode: 0
    obfuscate: 1
    fast_conv: 0
    input_files:
      - path/to/input_files # support http://
Operator lists
==============
+----------------------------------+--------------+--------+-------------------------------------------------------+
| Operator | Android NN | Status | Remark |
+==================================+==============+========+=======================================================+
| ADD | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| AVERAGE\_POOL\_2D | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| BATCH\_NORM | | Y | Fusion with activation is supported |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| BIAS\_ADD | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| CHANNEL\_SHUFFLE | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| CONCATENATION | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| CONV\_2D | Y | Y | Fusion with BN and activation layer is supported |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| DEPTHWISE\_CONV\_2D | Y | Y | Only multiplier = 1 is supported; Fusion is supported |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| DEPTH\_TO\_SPACE | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| DEQUANTIZE | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| EMBEDDING\_LOOKUP | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| FLOOR | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| FULLY\_CONNECTED | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| GROUP\_CONV\_2D | | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| HASHTABLE\_LOOKUP | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| L2\_NORMALIZATION | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| L2\_POOL\_2D | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LOCAL\_RESPONSE\_NORMALIZATION | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LOGISTIC | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LSH\_PROJECTION | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| LSTM | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| MATMUL | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| MAX\_POOL\_2D | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| MUL | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| PSROI\_ALIGN | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| PRELU | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELU | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELU1 | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELU6 | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RELUX | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RESHAPE | Y | Y | Limited support |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RESIZE\_BILINEAR | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RNN | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| RPN\_PROPOSAL\_LAYER | | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| SOFTMAX | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| SPACE\_TO\_DEPTH | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| SVDF | Y | | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
| TANH | Y | Y | |
+----------------------------------+--------------+--------+-------------------------------------------------------+
.. Please keep in alphabetical order when editing
.. csv-table::
:widths: auto
:header: "Operator","Android NN","Supported","Remark"
"ADD","Y","Y",""
"AVERAGE_POOL_2D","Y","Y",""
"BATCH_NORM","","Y","Fusion with activation is supported"
"BIAS_ADD","","Y",""
"CHANNEL_SHUFFLE","","Y",""
"CONCATENATION","Y","Y",""
"CONV_2D","Y","Y","Fusion with BN and activation layer is supported"
"DEPTHWISE_CONV_2D","Y","Y","Only multiplier = 1 is supported; Fusion is supported"
"DEPTH_TO_SPACE","Y","Y",""
"DEQUANTIZE","Y","",""
"EMBEDDING_LOOKUP","Y","",""
"FLOOR","Y","",""
"FULLY_CONNECTED","Y","Y",""
"GROUP_CONV_2D","","",""
"HASHTABLE_LOOKUP","Y","",""
"L2_NORMALIZATION","Y","",""
"L2_POOL_2D","Y","",""
"LOCAL_RESPONSE_NORMALIZATION","Y","Y",""
"LOGISTIC","Y","Y",""
"LSH_PROJECTION","Y","",""
"LSTM","Y","",""
"MATMUL","","Y",""
"MAX_POOL_2D","Y","Y",""
"MUL","Y","",""
"PSROI_ALIGN","","Y",""
"PRELU","","Y",""
"RELU","Y","Y",""
"RELU1","Y","Y",""
"RELU6","Y","Y",""
"RELUX","","Y",""
"RESHAPE","Y","Y","Limited support"
"RESIZE_BILINEAR","Y","Y",""
"RNN","Y","",""
"RPN_PROPOSAL_LAYER","","Y",""
"SOFTMAX","Y","Y",""
"SPACE_TO_DEPTH","Y","Y",""
"SVDF","Y","",""
"TANH","Y","Y",""
......@@ -11,8 +11,8 @@ The main documentation is organized into the following sections:
getting_started/introduction
getting_started/create_a_model_deployment
getting_started/how_to_build
getting_started/docker
getting_started/how_to_build
getting_started/op_lists
.. toctree::
......