Commit c3de0aa9 authored by Megvii Engine Team

feat(doc_zh): add zh doc for v0.5.1

GitOrigin-RevId: cfbc5843c346e1a49e1e58314e35f9924ebef917
Parent 953d758f
......@@ -9,7 +9,7 @@ msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"POT-Creation-Date: 2020-07-07 14:11+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
......@@ -23,40 +23,6 @@ msgid "megengine.distributed package"
msgstr "megengine.distributed package"
#: ../../source/api_zh/megengine.distributed.rst:11
msgid "megengine.distributed.brainpp"
msgstr "megengine.distributed.brainpp"
#: megengine.distributed.brainpp.InitServer:1 of
msgid "Bases: :class:`object`"
msgstr "基类: :class:`object`"
#: megengine.distributed.brainpp.InitServer:1 of
msgid "server for synchronizing master hostname and port"
msgstr "用于同步master主机名和端口的服务器"
#: megengine.distributed.brainpp.ThreadXMLRPCServer:1 of
msgid ""
"Bases: :class:`socketserver.ThreadingMixIn`, "
":class:`xmlrpc.server.SimpleXMLRPCServer`"
msgstr ""
"基类: :class:`socketserver.ThreadingMixIn`, "
":class:`xmlrpc.server.SimpleXMLRPCServer`"
#: megengine.distributed.brainpp.dist_tqdm:1 of
msgid ""
"a wrapper of tqdm, only rank0 prints the progress bar, otherwise multiple "
"stdouts might mix up with each other in rlaunch mode"
msgstr "tqdm的封装,仅rank0下打印进度条,否则在rlaunch模式下多个标准输出可能会互相混淆"
#: megengine.distributed.brainpp.init_connect:1 of
msgid "client for synchronizing master hostname and port"
msgstr "用于同步master主机名和端口的客户端"
#: megengine.distributed.brainpp.launcher:1 of
msgid "get rlaunch environment and synchronize master address"
msgstr "获得rlaunch环境并同步主机地址"
#: ../../source/api_zh/megengine.distributed.rst:19
msgid "megengine.distributed.functional"
msgstr "megengine.distributed.functional"
......@@ -192,7 +158,7 @@ msgstr "创建聚合通信(collective communication)的reduce_scatter_sum算
msgid "Create reduce_sum operator for collective communication"
msgstr "创建聚合通信(collective communication)的reduce_sum算子"
#: ../../source/api_zh/megengine.distributed.rst:27
#: ../../source/api_zh/megengine.distributed.rst:19
msgid "megengine.distributed.helper"
msgstr "megengine.distributed.helper"
......@@ -228,7 +194,7 @@ msgstr "输出的计算图,默认使用inp的计算图"
msgid ":py:class:`~megengine._internal.mgb.SymbolVar`"
msgstr ":py:class:`~megengine._internal.mgb.SymbolVar`"
#: ../../source/api_zh/megengine.distributed.rst:35
#: ../../source/api_zh/megengine.distributed.rst:27
msgid "megengine.distributed.util"
msgstr "megengine.distributed.util"
......@@ -282,8 +248,8 @@ msgstr "阻止调用,直到组中的所有进程达到这个障碍点(barrie
#: megengine.distributed.util.init_process_group:1 of
msgid ""
"Initialize the distributed process group, and also specify the device used "
"in the current process."
"Initialize the distributed process group, and also specify the device "
"used in the current process."
msgstr "初始化分布式进程组,并且指定在当前进程中使用的设备。"
#: megengine.distributed.util.init_process_group:4 of
......@@ -320,8 +286,8 @@ msgstr ":py:class:`bool`"
#: megengine.distributed.util.synchronized:1 of
msgid ""
"Decorator. Decorated function will synchronize when finished. Specifically, "
"we use this to prevent data race during hub.load"
"Decorator. Decorated function will synchronize when finished. "
"Specifically, we use this to prevent data race during hub.load"
msgstr "装饰器(Decorator)。结束后,装饰后的函数会被同步。实际应用上,我们用它来防止hub.load期间的数据竞争"
#~ msgid "rank of the current process, use util.get_rank() as default"
......@@ -329,3 +295,33 @@ msgstr "装饰器(Decorator)。结束后,装饰后的函数会被同步。
#~ msgid "rank of root node, use 0 as default"
#~ msgstr "根节点(根进程)的进程序号(rank),使用0作为默认值"
#~ msgid "megengine.distributed.brainpp"
#~ msgstr "megengine.distributed.brainpp"
#~ msgid "Bases: :class:`object`"
#~ msgstr "基类: :class:`object`"
#~ msgid "server for synchronizing master hostname and port"
#~ msgstr "用于同步master主机名和端口的服务器"
#~ msgid ""
#~ "Bases: :class:`socketserver.ThreadingMixIn`, "
#~ ":class:`xmlrpc.server.SimpleXMLRPCServer`"
#~ msgstr ""
#~ "基类: :class:`socketserver.ThreadingMixIn`, "
#~ ":class:`xmlrpc.server.SimpleXMLRPCServer`"
#~ msgid ""
#~ "a wrapper of tqdm, only rank0 "
#~ "prints the progress bar, otherwise "
#~ "multiple stdouts might mix up with "
#~ "each other in rlaunch mode"
#~ msgstr "tqdm的封装,仅rank0下打印进度条,否则在rlaunch模式下多个标准输出可能会互相混淆"
#~ msgid "client for synchronizing master hostname and port"
#~ msgstr "用于同步master主机名和端口的客户端"
#~ msgid "get rlaunch environment and synchronize master address"
#~ msgstr "获得rlaunch环境并同步主机地址"
......@@ -9,7 +9,7 @@ msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"POT-Creation-Date: 2020-07-08 14:22+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
......@@ -29,7 +29,7 @@ msgstr "Bases: :class:`object`"
#: megengine.jit.trace:1 of
msgid "Wrap a callable and provide:"
msgstr "包装一个callable对象,并提供以下功能:"
msgstr "包装一个可调用对象,并提供以下功能:"
#: megengine.jit.trace:3 of
msgid "tracing via :meth:`.trace` and :meth:`.dump`"
......@@ -78,9 +78,9 @@ msgstr "是否对编译好的函数追溯(trace)进行性能评估(profile
#: megengine.jit.trace.__call__:1 of
msgid ""
"Evaluate on provided arguments, using compiled trace instead of the original"
" callable if applicable."
msgstr "对编译好的函数追溯(trace)在给定参数下执行,而不评估原来的callable对象(如果适用)。"
"Evaluate on provided arguments, using compiled trace instead of the "
"original callable if applicable."
msgstr "对编译好的函数追溯(trace)在给定参数下执行,而不评估原来的可调用对象(如果适用)。"
#: megengine.jit.trace.__call__ megengine.jit.trace.get_profile of
msgid "Returns"
......@@ -88,11 +88,11 @@ msgstr "返回"
#: megengine.jit.trace.__call__:4 of
msgid ""
"``None`` or :class:`~.Tensor` or tuple of :class:`~.Tensor`, depending on "
"the return value of wrapped callable."
"``None`` or :class:`~.Tensor` or tuple of :class:`~.Tensor`, depending on"
" the return value of wrapped callable."
msgstr ""
"``None`` 或 :class:`~.Tensor` 或 tuple of :class:`~.Tensor` "
"。需要根据包装后的callable对象返回的值来确定类型。"
"。需要根据包装后的可调用对象返回的值来确定类型。"
#: megengine.jit.trace.dump:1 of
msgid "Serialize trace to file system."
......@@ -103,33 +103,73 @@ msgid "positional only argument. Path of output file."
msgstr "仅作为位置参数(positional only argument)。输出文件所在路径。"
#: megengine.jit.trace.dump:4 of
msgid "names of the input tensors in the traced function"
msgstr "被追溯(traced)函数的输入张量的名字"
msgid "names of the input tensors in the traced function."
msgstr "被追溯(traced)函数的输入张量的名字"
#: megengine.jit.trace.dump:5 of
msgid "whether output is appended to ``fpath``"
msgstr "是否在 ``fpath`` 后追加输出"
msgid "whether output is appended to ``fpath``."
msgstr "是否在 ``fpath`` 后追加输出"
#: megengine.jit.trace.dump:6 of
msgid "whether to enable optimize_for_inference pass before dump."
msgstr "是否在模型转储之前打开 ``optimize_for_inference`` 开关。"
#: megengine.jit.trace.dump:9 of
msgid ""
"whether to use float16 for I/O between oprs and use float32 as internal "
"computation precision. Note the output var would be changed to float16"
"computation precision. Note the output var would be changed to float16."
msgstr "是否使用float16作为算子间I/O的数据精度,同时float32作为内部计算的数据精度。注意输出变量的类型也随之更改为float16。"
#: megengine.jit.trace.dump:9 of
msgid "whether to use float16 for both I/O and computation precision"
msgstr "是否使用float16同时作为算子间I/O和内部计算的数据精度"
#: megengine.jit.trace.dump:12 of
msgid "whether to use float16 for both I/O and computation precision."
msgstr "是否使用float16同时作为算子间I/O和内部计算的数据精度。"
#: megengine.jit.trace.dump:15 of
msgid "whether to use NHWCD4 data layout. This is faster on some OpenCL backend."
msgstr "是否使用NHWCD4数据格式。在某些OpenCL设备上,会提高计算速度。"
#: megengine.jit.trace.dump:11 of
#: megengine.jit.trace.dump:17 of
msgid "whether to use NCHW4 data layout. it currently used in X86 AVX backend."
msgstr "是否使用NCHW88数据格式。当前用于X86 AVX后端。"
#: megengine.jit.trace.dump:19 of
msgid "whether to use NCHW4 data layout. it currently used in arm backend."
msgstr "是否使用NCHW44数据格式。当前用于arm后端。"
#: megengine.jit.trace.dump:21 of
msgid ""
"whether to use NHWCD4 data format. This is faster on some OpenCL devices"
msgstr "是否使用NHWCD4数据格式。在某些OpenCL设备上,会提高计算速度"
"whether to use NCHW4 data layout. it currently used in armv8.2+dotprod "
"backend."
msgstr "是否使用NCHW4_dot数据格式。当前用于armv8.2+dotprod后端。"
#: megengine.jit.trace.dump:23 of
msgid ""
"whether to use NCHW4 data layout. it currently used in nvidia "
"backend(based on cudnn)."
msgstr "是否使用NCHW4数据格式。当前用于nvidia后端(基于cudnn)。"
#: megengine.jit.trace.dump:25 of
msgid ""
"whether to use NCHW32 data layout. it currently used in nvidia backend "
"with tensorcore(based on cudnn)."
msgstr "是否使用NCHW32数据格式。当前与tensorcore用于nvidia后端(基于cudnn)。"
#: megengine.jit.trace.dump:27 of
msgid ""
"whether to use CHWN4 data layout. it currently used in nvidia backend "
"with tensorcore."
msgstr "是否使用CHWN4数据格式。当前与tensorcore用于nvidia后端。"
#: megengine.jit.trace.dump:30 of
msgid "whether to fuse conv+bias+nonlinearty into one opr."
msgstr "是否融合 CONV + bias + 非线性激活成一个算子。"
#: megengine.jit.trace.dump:13 of
#: megengine.jit.trace.dump:32 of
msgid ""
"whether to fuse conv+bias+nonlinearty into one opr. This is supported only "
"in NHWCD4 format."
msgstr "是否融合 CONV + bias + 非线性激活子成一个算子。仅在NHWCD4格式中支持。"
"whether to fuse conv_bias with z input for inference on nvidia "
"backend(this optimization pass will result in mismatch of the precision "
"of output of training and inference)"
msgstr "推理阶段是否在nvidia后端对输入z融合 CONV + bias 成一个算子(这个优化会导致训练和推理的输出精度不一致)"
#: megengine.jit.trace.get_profile:1 of
msgid "Get profiling result for compiled trace."
......@@ -141,7 +181,7 @@ msgstr "一个兼容json的对象。"
#: megengine.jit.trace.trace:1 of
msgid "Trace wrapped callable with provided arguments."
msgstr "使用提供的参数,追溯(trace)包装后的callable对象"
msgstr "使用提供的参数,追溯(trace)包装后的可调用对象。"
#: ../../source/api_zh/megengine.jit.rst:11
msgid "megengine.jit.sublinear\\_memory\\_config"
......@@ -166,8 +206,8 @@ msgid ""
"Default: 0. It can also be set through the environmental variable "
"'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER'."
msgstr ""
"使用遗传算法寻找最优切分策略时的迭代轮数。默认:0。也可以通过环境变量 'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER' "
"进行设置。"
"使用遗传算法寻找最优切分策略时的迭代轮数。默认:0。也可以通过环境变量 "
"'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER' 进行设置。"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:12 of
msgid ""
......@@ -181,9 +221,9 @@ msgstr ""
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:16 of
msgid ""
"memory lower bound of bottleneck size in MB for sublinear memory "
"optimization. It can be used to perform manual tradeoff between memory and "
"speed. Default: 0. It can also be set through the environmental variable "
"'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB'."
"optimization. It can be used to perform manual tradeoff between memory "
"and speed. Default: 0. It can also be set through the environmental "
"variable 'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB'."
msgstr ""
"次线性内存优化瓶颈大小的下界(以MB为单位)。它可用于在内存和速度之间进行手动权衡。默认:0。也可以通过设置环境变量 "
"'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB' 来实现。"
......@@ -191,8 +231,8 @@ msgstr ""
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:20 of
msgid ""
"number of thread workers to search the optimum checkpoints in sublinear "
"memory optimization. Default: half of cpu number in the system. Note: the "
"value must be greater or equal to one. It can also be set through the "
"memory optimization. Default: half of cpu number in the system. Note: the"
" value must be greater or equal to one. It can also be set through the "
"environmental variable 'MGB_SUBLINEAR_MEMORY_WORKERS'."
msgstr ""
"搜索次线性内存优化最优切分策略时使用的线程数。默认:当前系统中CPU数目的一半。注意:该参数值需要大于等于1。也可以通过设置环境变量 "
......
......@@ -192,7 +192,7 @@ msgstr "计算当前调度器(scheduler)的学习率。"
#: megengine.optimizer.lr_scheduler.LRScheduler.load_state_dict:1
#: megengine.optimizer.multi_step_lr.MultiStepLR.load_state_dict:1 of
msgid "Loads the schedulers state."
msgstr "加载调度器(scheduler)的状态"
msgstr "加载调度器(scheduler)的状态"
#: megengine.optimizer.lr_scheduler.LRScheduler.load_state_dict:3
#: megengine.optimizer.multi_step_lr.MultiStepLR.load_state_dict:3 of
......