MegEngine 天元 / Docs
Commit c3de0aa9
Authored Jul 08, 2020 by Megvii Engine Team

feat(doc_zh): add zh doc for v0.5.1

GitOrigin-RevId: cfbc5843c346e1a49e1e58314e35f9924ebef917

Parent: 953d758f
Showing 7 changed files with 1016 additions and 854 deletions (+1016 −854)
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.core.po          +123 −112
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.distributed.po   +37  −41
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.functional.po    +270 −264
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.jit.po           +71  −31
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.module.po        +333 −305
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.optimizer.po     +1   −1
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.quantization.po  +181 −100
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.core.po
(This diff is collapsed.)
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.distributed.po

@@ -9,7 +9,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: MegEngine Documents\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2020-06-15 09:28-0700\n"
+"POT-Creation-Date: 2020-07-07 14:11+0800\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
@@ -23,40 +23,6 @@ msgid "megengine.distributed package"
 msgstr "megengine.distributed package"
 
-#: ../../source/api_zh/megengine.distributed.rst:11
-msgid "megengine.distributed.brainpp"
-msgstr "megengine.distributed.brainpp"
-
-#: megengine.distributed.brainpp.InitServer:1 of
-msgid "Bases: :class:`object`"
-msgstr "基类: :class:`object`"
-
-#: megengine.distributed.brainpp.InitServer:1 of
-msgid "server for synchronizing master hostname and port"
-msgstr "用于同步master主机名和端口的服务器"
-
-#: megengine.distributed.brainpp.ThreadXMLRPCServer:1 of
-msgid ""
-"Bases: :class:`socketserver.ThreadingMixIn`, "
-":class:`xmlrpc.server.SimpleXMLRPCServer`"
-msgstr ""
-"基类: :class:`socketserver.ThreadingMixIn`, "
-":class:`xmlrpc.server.SimpleXMLRPCServer`"
-
-#: megengine.distributed.brainpp.dist_tqdm:1 of
-msgid ""
-"a wrapper of tqdm, only rank0 prints the progress bar, otherwise multiple "
-"stdouts might mix up with each other in rlaunch mode"
-msgstr "tqdm的封装,仅rank0下打印进度条,否则在rlaunch模式下多个标准输出可能会互相混淆"
-
-#: megengine.distributed.brainpp.init_connect:1 of
-msgid "client for synchronizing master hostname and port"
-msgstr "用于同步master主机名和端口的客户端"
-
-#: megengine.distributed.brainpp.launcher:1 of
-msgid "get rlaunch environment and synchronize master address"
-msgstr "获得rlaunch环境并同步主机地址"
-
 #: ../../source/api_zh/megengine.distributed.rst:19
 msgid "megengine.distributed.functional"
 msgstr "megengine.distributed.functional"
@@ -192,7 +158,7 @@ msgstr "创建聚合通信(collective communication)的reduce_scatter_sum算子"
 msgid "Create reduce_sum operator for collective communication"
 msgstr "创建聚合通信(collective communication)的reduce_sum算子"
 
-#: ../../source/api_zh/megengine.distributed.rst:27
+#: ../../source/api_zh/megengine.distributed.rst:19
 msgid "megengine.distributed.helper"
 msgstr "megengine.distributed.helper"
@@ -228,7 +194,7 @@ msgstr "输出的计算图,默认使用inp的计算图"
 msgid ":py:class:`~megengine._internal.mgb.SymbolVar`"
 msgstr ":py:class:`~megengine._internal.mgb.SymbolVar`"
 
-#: ../../source/api_zh/megengine.distributed.rst:35
+#: ../../source/api_zh/megengine.distributed.rst:27
 msgid "megengine.distributed.util"
 msgstr "megengine.distributed.util"
@@ -282,8 +248,8 @@ msgstr "阻止调用,直到组中的所有进程达到这个障碍点(barrier)"
 #: megengine.distributed.util.init_process_group:1 of
 msgid ""
-"Initialize the distributed process group, and also specify the device used "
-"in the current process."
+"Initialize the distributed process group, and also specify the device "
+"used in the current process."
 msgstr "初始化分布式进程组,并且指定在当前进程中使用的设备。"
 
 #: megengine.distributed.util.init_process_group:4 of
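Many hunks in this commit, like the init_process_group one above, only re-wrap long `msgid` lines. In the `.po` format, consecutive quoted fragments of one entry are concatenated, so such re-wraps leave the actual message unchanged. A minimal sketch using Python's implicit string-literal concatenation, with the fragments copied from the hunk above:

```python
# Old wrapping (fragments concatenate, trailing space kept inside the quotes):
old = ("Initialize the distributed process group, and also specify the device used "
       "in the current process.")
# New wrapping from this commit, split at a different point:
new = ("Initialize the distributed process group, and also specify the device "
       "used in the current process.")
# The hunk is a pure re-wrap; the resulting message is byte-identical.
assert old == new
```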
@@ -320,8 +286,8 @@ msgstr ":py:class:`bool`"
 #: megengine.distributed.util.synchronized:1 of
 msgid ""
-"Decorator. Decorated function will synchronize when finished. Specifically, "
-"we use this to prevent data race during hub.load"
+"Decorator. Decorated function will synchronize when finished. "
+"Specifically, we use this to prevent data race during hub.load"
 msgstr "装饰器(Decorator)。结束后,装饰后的函数会被同步。实际应用上,我们用它来防止hub.load期间的数据竞争"
 
 #~ msgid "rank of the current process, use util.get_rank() as default"
@@ -329,3 +295,33 @@ msgstr "装饰器(Decorator)。结束后,装饰后的函数会被同步。实际应用上,我们用它来防止hub.load期间的数据竞争"
 #~ msgid "rank of root node, use 0 as default"
 #~ msgstr "根节点(根进程)的进程序号(rank),使用0作为默认值"
 
+#~ msgid "megengine.distributed.brainpp"
+#~ msgstr "megengine.distributed.brainpp"
+
+#~ msgid "Bases: :class:`object`"
+#~ msgstr "基类: :class:`object`"
+
+#~ msgid "server for synchronizing master hostname and port"
+#~ msgstr "用于同步master主机名和端口的服务器"
+
+#~ msgid ""
+#~ "Bases: :class:`socketserver.ThreadingMixIn`, "
+#~ ":class:`xmlrpc.server.SimpleXMLRPCServer`"
+#~ msgstr ""
+#~ "基类: :class:`socketserver.ThreadingMixIn`, "
+#~ ":class:`xmlrpc.server.SimpleXMLRPCServer`"
+
+#~ msgid ""
+#~ "a wrapper of tqdm, only rank0 "
+#~ "prints the progress bar, otherwise "
+#~ "multiple stdouts might mix up with "
+#~ "each other in rlaunch mode"
+#~ msgstr "tqdm的封装,仅rank0下打印进度条,否则在rlaunch模式下多个标准输出可能会互相混淆"
+
+#~ msgid "client for synchronizing master hostname and port"
+#~ msgstr "用于同步master主机名和端口的客户端"
+
+#~ msgid "get rlaunch environment and synchronize master address"
+#~ msgstr "获得rlaunch环境并同步主机地址"
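The entries removed from megengine.distributed.brainpp do not disappear outright: gettext prefixes translations that no longer exist in the template with `#~` ("obsolete" entries), which is why they reappear at the bottom of the file in this hunk. A stdlib-only sketch of spotting such entries (the sample text is a hypothetical two-entry fragment, not copied verbatim from the file):

```python
# A tiny .po fragment: one live entry followed by one obsolete entry.
sample = '''\
msgid "client for synchronizing master hostname and port"
msgstr "..."

#~ msgid "get rlaunch environment and synchronize master address"
#~ msgstr "..."
'''

# Obsolete message ids are the lines whose marker is "#~ msgid".
obsolete = [line for line in sample.splitlines() if line.startswith("#~ msgid")]
print(len(obsolete))  # 1
```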
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.functional.po
(This diff is collapsed.)
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.jit.po

@@ -9,7 +9,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: MegEngine Documents\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2020-06-15 09:28-0700\n"
+"POT-Creation-Date: 2020-07-08 14:22+0800\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
@@ -29,7 +29,7 @@ msgstr "Bases: :class:`object`"
 #: megengine.jit.trace:1 of
 msgid "Wrap a callable and provide:"
-msgstr "包装一个callable对象,并提供以下功能:"
+msgstr "包装一个可调用对象,并提供以下功能:"
 
 #: megengine.jit.trace:3 of
 msgid "tracing via :meth:`.trace` and :meth:`.dump`"
@@ -78,9 +78,9 @@ msgstr "是否对编译好的函数追溯(trace)进行性能评估(profile)"
 #: megengine.jit.trace.__call__:1 of
 msgid ""
-"Evaluate on provided arguments, using compiled trace instead of the original "
-" callable if applicable."
-msgstr "对编译好的函数追溯(trace)在给定参数下执行,而不评估原来的callable对象(如果适用)。"
+"Evaluate on provided arguments, using compiled trace instead of the "
+"original callable if applicable."
+msgstr "对编译好的函数追溯(trace)在给定参数下执行,而不评估原来的可调用对象(如果适用)。"
 
 #: megengine.jit.trace.__call__ megengine.jit.trace.get_profile of
 msgid "Returns"
@@ -88,11 +88,11 @@ msgstr "返回"
 #: megengine.jit.trace.__call__:4 of
 msgid ""
-"``None`` or :class:`~.Tensor` or tuple of :class:`~.Tensor`, depending on "
-"the return value of wrapped callable."
+"``None`` or :class:`~.Tensor` or tuple of :class:`~.Tensor`, depending on"
+" the return value of wrapped callable."
 msgstr ""
 "``None`` 或 :class:`~.Tensor` 或 tuple of :class:`~.Tensor` "
-"。需要根据包装后的callable对象返回的值来确定类型。"
+"。需要根据包装后的可调用对象返回的值来确定类型。"
 
 #: megengine.jit.trace.dump:1 of
 msgid "Serialize trace to file system."
@@ -103,33 +103,73 @@ msgid "positional only argument. Path of output file."
 msgstr "仅作为位置参数(positional only argument)。输出文件所在路径。"
 
 #: megengine.jit.trace.dump:4 of
-msgid "names of the input tensors in the traced function"
-msgstr "被追溯(traced)函数的输入张量的名字"
+msgid "names of the input tensors in the traced function."
+msgstr "被追溯(traced)函数的输入张量的名字。"
 
 #: megengine.jit.trace.dump:5 of
-msgid "whether output is appended to ``fpath``"
-msgstr "是否在 ``fpath`` 后追加输出"
+msgid "whether output is appended to ``fpath``."
+msgstr "是否在 ``fpath`` 后追加输出。"
 
 #: megengine.jit.trace.dump:6 of
 msgid "whether to enable optimize_for_inference pass before dump."
 msgstr "是否在模型转储之前打开 ``optimize_for_inference`` 开关。"
 
 #: megengine.jit.trace.dump:9 of
 msgid ""
 "whether to use float16 for I/O between oprs and use float32 as internal "
-"computation precision. Note the output var would be changed to float16"
+"computation precision. Note the output var would be changed to float16."
 msgstr "是否使用float16作为算子间I/O的数据精度,同时float32作为内部计算的数据精度。注意输出变量的类型也随之更改为float16。"
 
-#: megengine.jit.trace.dump:9 of
-msgid "whether to use float16 for both I/O and computation precision"
-msgstr "是否使用float16同时作为算子间I/O和内部计算的数据精度"
+#: megengine.jit.trace.dump:12 of
+msgid "whether to use float16 for both I/O and computation precision."
+msgstr "是否使用float16同时作为算子间I/O和内部计算的数据精度。"
+
+#: megengine.jit.trace.dump:15 of
+msgid "whether to use NHWCD4 data layout. This is faster on some OpenCL backend."
+msgstr "是否使用NHWCD4数据格式。在某些OpenCL设备上,会提高计算速度。"
 
-#: megengine.jit.trace.dump:11 of
-msgid ""
-"whether to use NHWCD4 data format. This is faster on some OpenCL devices"
-msgstr "是否使用NHWCD4数据格式。在某些OpenCL设备上,会提高计算速度"
+#: megengine.jit.trace.dump:17 of
+msgid "whether to use NCHW4 data layout. it currently used in X86 AVX backend."
+msgstr "是否使用NCHW88数据格式。当前用于X86 AVX后端。"
+
+#: megengine.jit.trace.dump:19 of
+msgid "whether to use NCHW4 data layout. it currently used in arm backend."
+msgstr "是否使用NCHW44数据格式。当前用于arm后端。"
+
+#: megengine.jit.trace.dump:21 of
+msgid ""
+"whether to use NCHW4 data layout. it currently used in armv8.2+dotprod "
+"backend."
+msgstr "是否使用NCHW4_dot数据格式。当前用于armv8.2+dotprod后端。"
+
+#: megengine.jit.trace.dump:23 of
+msgid ""
+"whether to use NCHW4 data layout. it currently used in nvidia "
+"backend(based on cudnn)."
+msgstr "是否使用NCHW4数据格式。当前用于nvidia后端(基于cudnn)。"
+
+#: megengine.jit.trace.dump:25 of
+msgid ""
+"whether to use NCHW32 data layout. it currently used in nvidia backend "
+"with tensorcore(based on cudnn)."
+msgstr "是否使用NCHW32数据格式。当前与tensorcore用于nvidia后端(基于cudnn)。"
+
+#: megengine.jit.trace.dump:27 of
+msgid ""
+"whether to use CHWN4 data layout. it currently used in nvidia backend "
+"with tensorcore."
+msgstr "是否使用CHWN4数据格式。当前与tensorcore用于nvidia后端。"
+
+#: megengine.jit.trace.dump:30 of
+msgid "whether to fuse conv+bias+nonlinearty into one opr."
+msgstr "是否融合 CONV + bias + 非线性激活成一个算子。"
 
-#: megengine.jit.trace.dump:13 of
-msgid ""
-"whether to fuse conv+bias+nonlinearty into one opr. This is supported only "
-"in NHWCD4 format."
-msgstr "是否融合 CONV + bias + 非线性激活子成一个算子。仅在NHWCD4格式中支持。"
+#: megengine.jit.trace.dump:32 of
+msgid ""
+"whether to fuse conv_bias with z input for inference on nvidia "
+"backend(this optimization pass will result in mismatch of the precision "
+"of output of training and inference)"
+msgstr "推理阶段是否在nvidia后端对输入z融合 CONV + bias 成一个算子(这个优化会导致训练和推理的输出精度不一致)"
 
 #: megengine.jit.trace.get_profile:1 of
 msgid "Get profiling result for compiled trace."
@@ -141,7 +181,7 @@ msgstr "一个兼容json的对象。"
 #: megengine.jit.trace.trace:1 of
 msgid "Trace wrapped callable with provided arguments."
-msgstr "使用提供的参数,追溯(trace)包装后的callable对象"
+msgstr "使用提供的参数,追溯(trace)包装后的可调用对象。"
 
 #: ../../source/api_zh/megengine.jit.rst:11
 msgid "megengine.jit.sublinear\\_memory\\_config"
@@ -166,8 +206,8 @@ msgid ""
 "Default: 0. It can also be set through the environmental variable "
 "'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER'."
 msgstr ""
-"使用遗传算法寻找最优切分策略时的迭代轮数。默认:0。也可以通过环境变量'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER' "
-"进行设置。"
+"使用遗传算法寻找最优切分策略时的迭代轮数。默认:0。也可以通过环境变量 "
+"'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER' 进行设置。"
 
 #: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:12 of
 msgid ""
@@ -181,9 +221,9 @@ msgstr ""
 #: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:16 of
 msgid ""
 "memory lower bound of bottleneck size in MB for sublinear memory "
-"optimization. It can be used to perform manual tradeoff between memory and "
-" speed. Default: 0. It can also be set through the environmental variable "
-"'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB'."
+"optimization. It can be used to perform manual tradeoff between memory "
+"and speed. Default: 0. It can also be set through the environmental "
+"variable 'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB'."
 msgstr ""
 "次线性内存优化瓶颈大小的下界(以MB为单位)。它可用于在内存和速度之间进行手动权衡。默认:0。也可以通过设置环境变量 "
 "'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB' 来实现。"
@@ -191,8 +231,8 @@ msgstr ""
 #: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:20 of
 msgid ""
 "number of thread workers to search the optimum checkpoints in sublinear "
-"memory optimization. Default: half of cpu number in the system. Note: the "
-"value must be greater or equal to one. It can also be set through the "
+"memory optimization. Default: half of cpu number in the system. Note: the"
+" value must be greater or equal to one. It can also be set through the "
 "environmental variable 'MGB_SUBLINEAR_MEMORY_WORKERS'."
 msgstr ""
 "搜索次线性内存优化最优切分策略时使用的线程数。默认:当前系统中CPU数目的一半。注意:该参数值需要大于等于1。也可以通过设置环境变量 "
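The SublinearMemoryConfig text above says the worker count defaults to half the CPU count and must be at least one. A stdlib-only sketch of that default; the exact expression MegEngine uses internally is an assumption here, only the documented behavior ("half of cpu number", "greater or equal to one") is taken from the text:

```python
import multiprocessing

# Documented default: half the system CPU count, clamped so the
# value stays >= 1 even on a single-core machine.
num_worker = max(1, multiprocessing.cpu_count() // 2)
print(num_worker)
```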
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.module.po
(This diff is collapsed.)
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.optimizer.po

@@ -192,7 +192,7 @@ msgstr "计算当前调度器(scheduler)的学习率。"
 #: megengine.optimizer.lr_scheduler.LRScheduler.load_state_dict:1
 #: megengine.optimizer.multi_step_lr.MultiStepLR.load_state_dict:1 of
 msgid "Loads the schedulers state."
-msgstr "加载调度器(scheduler)的状态"
+msgstr "加载调度器(scheduler)的状态。"
 
 #: megengine.optimizer.lr_scheduler.LRScheduler.load_state_dict:3
 #: megengine.optimizer.multi_step_lr.MultiStepLR.load_state_dict:3 of
source/locale/zh_CN/LC_MESSAGES/api_zh/megengine.quantization.po
(This diff is collapsed.)