Commit 2b79cd65 authored by Megvii Engine Team

add 0.5.0 API translation po file

GitOrigin-RevId: 1f27af8ac5da3a410de21ce4814b33188e2b10d1
Parent c8d1cee9
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/advanced/deployment.rst:4
msgid "模型部署"
msgstr "Model Deployment"
#: ../../source/advanced/deployment.rst:6
msgid ""
"MegEngine 的一大核心优势是“训练推理一体化”,其中“训练”是在 Python 环境中进行的,而“推理”则特指在 C++ "
"环境下使用训练完成的模型进行推理。而将模型迁移到无需依赖 Python 的环境中,使其能正常进行推理计算,被称为 **部署** "
"。部署的目的是简化除了模型推理所必需的一切其它依赖,使推理计算的耗时变得尽可能少,比如手机人脸识别场景下会需求毫秒级的优化,而这必须依赖于 C++"
" 环境才能实现。"
msgstr "A core strength of MegEngine is the unification of training and inference: “training” takes place in a Python environment, while “inference” refers specifically to running the trained model in a C++ environment. Migrating a model to an environment free of Python dependencies so that it can run inference normally is called **deployment**. The goal of deployment is to strip away every dependency other than those required for model inference and to minimize inference time; for example, face recognition on mobile phones demands millisecond-level optimization, which can only be achieved in a C++ environment."
#: ../../source/advanced/deployment.rst:8
msgid ""
"本章从一个训练好的异或网络模型(见 `MegStudio 项目 <https://studio.brainpp.com/public-"
"project/53>`_ )出发,讲解如何将其部署到 CPU(X86)环境下运行。主要分为以下步骤:"
msgstr "Starting from a trained XOR network model (see the `MegStudio project <https://studio.brainpp.com/public-project/53>`_ ), this chapter explains how to deploy it to run in a CPU (x86) environment. The main steps are:"
#: ../../source/advanced/deployment.rst:10
msgid "将模型序列化并导出到文件;"
msgstr "Serialize the model and dump it to a file;"
#: ../../source/advanced/deployment.rst:11
msgid "编写读取模型的 C++ 脚本;"
msgstr "Write a C++ program that loads the model;"
#: ../../source/advanced/deployment.rst:12
msgid "编译 C++ 脚本成可执行文件。"
msgstr "Compile the C++ program into an executable."
#: ../../source/advanced/deployment.rst:15
msgid "模型序列化"
msgstr "Model Serialization"
#: ../../source/advanced/deployment.rst:17
msgid ""
"为了将模型进行部署,首先我们需要使模型不依赖于 Python 环境,这一步称作 **序列化** 。序列化只支持静态图,这是因为“剥离” "
"Python 环境的操作需要网络结构是确定不可变的,而这依赖于静态图模式下的编译操作(详情见 "
":ref:`dynamic_and_static_graph` ),另外编译本身对计算图的优化也是部署的必要步骤。"
msgstr "To deploy a model, we first need to make it independent of the Python environment; this step is called **serialization**. Serialization only supports static graphs, because stripping away the Python environment requires the network structure to be fixed and immutable, which relies on the compilation performed in static graph mode (see :ref:`dynamic_and_static_graph` for details). Moreover, the graph optimizations performed during compilation are themselves a necessary step for deployment."
#: ../../source/advanced/deployment.rst:19
msgid "在 MegEngine 中,序列化对应的接口为 :meth:`~.trace.dump` ,对于一个训练好的网络模型,我们使用以下代码来将其序列化:"
msgstr "In MegEngine, the interface for serialization is :meth:`~.trace.dump`. For a trained network model, we serialize it with the following code:"
#: ../../source/advanced/deployment.rst:40
msgid ""
"这里再解释一下编译与序列化相关的一些操作。编译会将被 :class:`~.trace` 装饰的函数(这里的 ``pred_fun`` "
")视为计算图的全部流程,计算图的输入严格等于 ``pred_fun`` 的位置参数(positional arguments,即参数列表中星号 "
"``*`` 前的部分,这里的 ``data`` 变量),计算图的输出严格等于函数的返回值(这里的 ``pred_normalized`` "
")。而这也会进一步影响到部署时模型的输入和输出,即如果运行部署后的该模型,会需要一个 ``data`` 格式的输入,返回一个 "
"``pred_normalized`` 格式的值。"
msgstr "Here we explain a few operations related to compilation and serialization. Compilation treats the function decorated by :class:`~.trace` ( ``pred_fun`` here) as the entire computing graph: the inputs of the graph are exactly the positional arguments of ``pred_fun`` (the part before the ``*`` in the parameter list, the ``data`` variable here), and the outputs of the graph are exactly the return values of the function ( ``pred_normalized`` here). This in turn determines the inputs and outputs of the deployed model: running the deployed model requires an input in the format of ``data`` and returns a value in the format of ``pred_normalized``."
#: ../../source/advanced/deployment.rst:42
msgid ""
"为了便于我们在 C++ 代码中给序列化之后的模型传入输入数据,我们需要给输入赋予一个名字,即代码中的 ``arg_names`` "
"参数。由于该示例中 ``pred_fun`` 只有一个位置参数,即计算图只有一个输入,所以传给 ``arg_names`` "
"的列表也只需一个字符串值即可,可以是任意名字,用于在 C++ 代码中引用,详情见下节内容。"
msgstr "To make it easy to feed input data to the serialized model from C++ code, we need to give the input a name, i.e. the ``arg_names`` parameter in the code. Since ``pred_fun`` in this example has only one positional argument, i.e. the computing graph has only one input, the list passed to ``arg_names`` needs only one string. It can be any name and is used to reference the input from C++ code; see the next section for details."
#: ../../source/advanced/deployment.rst:44
msgid ""
"总结一下,我们对在静态图模式下训练得到的模型,可以使用 :meth:`~.trace.dump` "
"方法直接序列化,而无需对模型代码做出任何修改,这就是“训练推理一体化”的由来。"
msgstr "To summarize: a model trained in static graph mode can be serialized directly with :meth:`~.trace.dump`, without any modification to the model code. This is where the phrase “unified training and inference” comes from."
#: ../../source/advanced/deployment.rst:47
msgid "编写 C++ 程序读取模型"
msgstr "Writing a C++ Program to Load the Model"
#: ../../source/advanced/deployment.rst:49
msgid ""
"接下来我们需要编写一个 C++ "
"程序,来实现我们期望在部署平台上完成的功能。在这里我们基于上面导出的异或网络模型,实现一个最简单的功能,即给定两个浮点数,输出对其做异或操作,结果为"
" 0 的概率以及为 1 的概率。"
msgstr "Next we write a C++ program that implements the functionality we want on the deployment platform. Based on the XOR network model exported above, we implement the simplest possible function: given two floating-point numbers, output the probability that their XOR is 0 and the probability that it is 1."
#: ../../source/advanced/deployment.rst:51
msgid ""
"在此之前,为了能够正常使用 MegEngine 底层 C++ 接口,需要先按照 :ref:`installation` 从源码编译安装 "
"MegEngine,并执行 ``make install`` 保证 MegEngine 相关 C++ 文件被正确安装。"
msgstr "Before that, to use the low-level C++ interfaces of MegEngine, you need to build and install MegEngine from source following :ref:`installation`, and run ``make install`` to make sure the MegEngine C++ files are installed correctly."
#: ../../source/advanced/deployment.rst:53
msgid ""
"实现上述异或计算的示例 C++ 代码如下(引自 `xor-deploy.cpp "
"<https://github.com/MegEngine/MegEngine/blob/master/sdk/xor-deploy/xor-"
"deploy.cpp>`_ ):"
msgstr "The example C++ code implementing the XOR computation above is as follows (taken from `xor-deploy.cpp <https://github.com/MegEngine/MegEngine/blob/master/sdk/xor-deploy/xor-deploy.cpp>`_ ):"
#: ../../source/advanced/deployment.rst:58
msgid ""
"简单解释一下代码的意思,我们首先通过 ``serialization::GraphLoader`` 将模型加载进来,接着通过 "
"``tensor_map`` 和上节指定的输入名称 ``data`` ,找到模型的输入指针,再将运行时提供的输入 ``x`` 和 ``y`` "
"赋值给输入指针,然后我们使用 ``network.graph->compile`` 将模型编译成一个函数接口,并调用执行,最后将得到的结果 "
"``predict`` 进行输出,该输出的两个值即为异或结果为 0 的概率以及为 1 的概率 。"
msgstr "A brief explanation of the code: we first load the model via ``serialization::GraphLoader``, then locate the model's input pointer through ``tensor_map`` and the input name ``data`` specified in the previous section, and assign the runtime inputs ``x`` and ``y`` to that pointer. We then compile the model into a callable with ``network.graph->compile``, invoke it, and finally print the resulting ``predict``, whose two values are the probabilities that the XOR result is 0 and 1, respectively."
#: ../../source/advanced/deployment.rst:61
msgid "编译并执行"
msgstr "Compile and Run"
#: ../../source/advanced/deployment.rst:63
msgid ""
"为了更完整地实现“训练推理一体化”,我们还需要支持同一个 C++ "
"程序能够交叉编译到不同平台上执行,而不需要修改代码。之所以能够实现不同平台一套代码,是由于底层依赖的算子库(内部称作 "
"MegDNN)实现了对不同平台接口的封装,在编译时会自动根据指定的目标平台选择兼容的接口。"
msgstr "To realize “unified training and inference” more completely, the same C++ program must also be cross-compilable to different target platforms without code changes. A single code base can serve different platforms because the underlying operator library (internally called MegDNN) wraps the platform-specific interfaces and automatically selects compatible ones for the specified target platform at compile time."
#: ../../source/advanced/deployment.rst:67
msgid "目前发布的版本我们开放了对 CPU(X86、X64)和 GPU(CUDA)平台的支持,后续会继续开放对 ARM 平台的支持。"
msgstr "The current release supports the CPU (x86, x64) and GPU (CUDA) platforms; support for the ARM platform will be released later."
#: ../../source/advanced/deployment.rst:69
msgid "我们在这里以 CPU 平台为例,直接使用 gcc 或者 clang (用 ``$CXX`` 指代)进行编译即可:"
msgstr "Taking the CPU platform as an example, we can compile directly with gcc or clang (denoted by ``$CXX``):"
#: ../../source/advanced/deployment.rst:75
msgid ""
"上面的 ``$MGE_INSTALL_PATH`` 指代了编译安装时通过 ``CMAKE_INSTALL_PREFIX`` "
"指定的安装路径。编译完成之后,通过以下命令执行即可:"
msgstr "``$MGE_INSTALL_PATH`` above refers to the install path specified via ``CMAKE_INSTALL_PREFIX`` at build time. After compilation, run the executable with the following command:"
#: ../../source/advanced/deployment.rst:81
msgid ""
"这里将 ``$MGE_INSTALL_PATH`` 加进 ``LD_LIBRARY_PATH`` 环境变量,确保 MegEngine "
"库可以被编译器找到。上面命令对应的输出如下:"
msgstr "Here we add ``$MGE_INSTALL_PATH`` to the ``LD_LIBRARY_PATH`` environment variable so that the MegEngine library can be found by the dynamic linker at runtime. The output of the command above is:"
#: ../../source/advanced/deployment.rst:87
msgid "至此我们便完成了从 Python 模型到 C++ 可执行文件的部署流程。"
msgstr "This completes the deployment process from a Python model to a C++ executable."
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/advanced/distributed.rst:4
msgid "分布式训练"
msgstr "Distributed Training"
#: ../../source/advanced/distributed.rst:6
msgid ""
"本章我们将介绍如何在 MegEngine 中高效地利用多GPU进行分布式训练。分布式训练是指同时利用一台或者多台机器上的 GPU "
"进行并行计算。在深度学习领域,最常见的并行计算方式是在数据层面进行的,即每个 GPU "
"各自负责一部分数据,并需要跑通整个训练和推理流程。这种方式叫做 **数据并行** 。"
msgstr "This chapter introduces how to use multiple GPUs efficiently for distributed training in MegEngine. Distributed training means performing parallel computation with GPUs on one or more machines simultaneously. In deep learning, the most common form of parallelism works at the data level: each GPU is responsible for a portion of the data and runs the whole training and inference pipeline. This approach is called **data parallelism**."
#: ../../source/advanced/distributed.rst:8
msgid "目前 MegEngine 开放的接口支持单机多卡和多机多卡的数据并行方式。"
msgstr "The interfaces currently exposed by MegEngine support data parallelism in both single-machine multi-GPU and multi-machine multi-GPU settings."
#: ../../source/advanced/distributed.rst:11
msgid "单机多卡"
msgstr "Single Machine, Multiple GPUs"
#: ../../source/advanced/distributed.rst:13
msgid "单机多卡是最为常用的方式,比如单机四卡、单机八卡,足以支持我们完成大部分模型的训练。我们本节按照以下顺序进行介绍:"
msgstr "Single-machine multi-GPU training is the most common setup; for example, four or eight GPUs on one machine are enough for training most models. This section covers the following topics, in order:"
#: ../../source/advanced/distributed.rst:15
msgid "多进程间的通信机制"
msgstr "The communication mechanism between processes"
#: ../../source/advanced/distributed.rst:16
msgid "如何初始化分布式训练"
msgstr "How to initialize distributed training"
#: ../../source/advanced/distributed.rst:17
#: ../../source/advanced/distributed.rst:72
msgid "数据处理流程"
msgstr "Data processing pipeline"
#: ../../source/advanced/distributed.rst:18
msgid "进程间训练状态如何同步"
msgstr "How training state is synchronized across processes"
#: ../../source/advanced/distributed.rst:19
msgid "如何在多进程环境中将模型保存与加载"
msgstr "How to save and load models in a multi-process setting"
#: ../../source/advanced/distributed.rst:22
msgid "通信机制简介"
msgstr "Introduction to the Communication Mechanism"
#: ../../source/advanced/distributed.rst:24
msgid ""
"在 MegEngine 中,对多 GPU 的管理基于 Python 自带的多进程库 :py:mod:`~.multiprocessing` "
"。假设一台机器上有 8 张显卡,那么我们需要通过 :py:class:`.multiprocessing.Process` 创建 8 "
"个进程,与显卡一一对应。而为了能让这 8 个各自独立的进程能一同进行模型训练,我们需要管理它们之间的通信。"
msgstr "In MegEngine, managing multiple GPUs is based on Python's built-in multiprocessing library :py:mod:`~.multiprocessing`. Suppose a machine has 8 GPUs; we then create 8 processes via :py:class:`.multiprocessing.Process`, one per GPU. To let these 8 independent processes train a model together, we need to manage the communication among them."
#: ../../source/advanced/distributed.rst:26
msgid ""
"首先我们会给每个进程分配一个进程序号(rank),从 0 到 7,作为每个进程的身份标识。通过 "
":py:class:`.multiprocessing.Process` 的 ``target`` "
"参数指明所有进程需要执行的目标函数,同时在函数参数中指明每个进程自己的序号,从而使得所有进程执行同一段代码却能分工合作,完成不重复的任务,如下代码所示:"
msgstr "First, each process is assigned a rank, from 0 to 7, as its identity. The target function to be executed by all processes is specified via the ``target`` parameter of :py:class:`.multiprocessing.Process`, and each process's own rank is passed in as a function argument, so that all processes run the same code yet divide the work without duplication, as shown below:"
#: ../../source/advanced/distributed.rst:40
msgid ""
"除了让每个进程能分辨各自的身份,我们还需要指定一个通信的接口,在 MegEngine 中我们采用的是 IP "
"地址和端口号的方式。在多机多卡中,由于存在多台机器,我们需要事先指定一台机器为主节点(master node),将其 IP "
"地址和用于通信的端口号提供给所有机器,让所有机器都可以访问该主节点,从而进行通信;而在单机多卡中,我们只需设置主节点为本机地址 "
"``localhost`` 即可。"
msgstr "Besides letting each process identify itself, we also need to specify a communication endpoint; in MegEngine this is an IP address plus a port number. In the multi-machine setting, since multiple machines are involved, we must designate one machine as the master node in advance and provide its IP address and communication port to all machines, so that every machine can reach the master node for communication. In the single-machine setting, we simply set the master node to the local address ``localhost``."
#: ../../source/advanced/distributed.rst:42
msgid "有了身份识别机制和通信方式,整个通信机制就基本完整了。"
msgstr "With an identity mechanism and a communication channel, the communication machinery is essentially complete."
#: ../../source/advanced/distributed.rst:45
msgid "初始化分布式训练"
msgstr "Initializing Distributed Training"
#: ../../source/advanced/distributed.rst:47
msgid "在 MegEngine 中,我们通过 :func:`~.init_process_group` 来初始化分布式训练。其接收以下参数"
msgstr "In MegEngine, distributed training is initialized via :func:`~.init_process_group`, which takes the following parameters:"
#: ../../source/advanced/distributed.rst:49
msgid "``master_ip`` (str) – 主节点的 IP 地址;"
msgstr "``master_ip`` (str) – the IP address of the master node;"
#: ../../source/advanced/distributed.rst:50
msgid "``master_port`` (int) – 所有进程通信使用的端口;"
msgstr "``master_port`` (int) – the port used by all processes for communication;"
#: ../../source/advanced/distributed.rst:51
msgid "``world_size`` (int) – 总共有多少进程参与该计算;"
msgstr "``world_size`` (int) – the total number of processes participating in the computation;"
#: ../../source/advanced/distributed.rst:52
msgid "``rank`` (int) – 当前进程的序号;"
msgstr "``rank`` (int) – the rank of the current process;"
#: ../../source/advanced/distributed.rst:53
msgid "``dev`` (int) - 当前进程绑定的 GPU 设备在本机器上的 ID。"
msgstr "``dev`` (int) – the local ID, on this machine, of the GPU device bound to the current process."
#: ../../source/advanced/distributed.rst:55
msgid "我们只需在每个进程执行的目标函数中,调用该接口,并传入与每个进程匹配的参数,即可开启多进程间的通信。如下代码所示:"
msgstr "We only need to call this interface in the target function of each process, passing the parameters matching that process, to enable communication between the processes, as shown below:"
#: ../../source/advanced/distributed.rst:74
msgid "在初始化分布式训练环境之后,我们便可以按照正常的流程进行训练了,但是由于需要每个进程处理不同的数据,我们还需要在数据部分做一些额外的操作。"
msgstr "After initializing the distributed training environment, we can train following the normal workflow; however, since each process must handle different data, some extra work is needed on the data side."
#: ../../source/advanced/distributed.rst:76
msgid ""
"在这里我们以载入 MNIST "
"数据为例,展示如何对数据做切分,使得每个进程拿到不重叠的数据。此处我们将整个数据集载入内存后再进行切分。这种方式比较低效,仅作为原理示意,更加高效的方式见"
" :ref:`dist_dataloader` 。"
msgstr "Taking MNIST as an example, we show how to split the data so that each process gets a non-overlapping portion. Here we load the whole dataset into memory before splitting it. This approach is inefficient and only serves to illustrate the principle; see :ref:`dist_dataloader` for a more efficient approach."
#: ../../source/advanced/distributed.rst:89
msgid "至此我们便得到了每个进程各自负责的、互不重叠的数据部分。"
msgstr "With this, each process has its own non-overlapping share of the data."
#: ../../source/advanced/distributed.rst:92
msgid "训练状态同步"
msgstr "Synchronizing Training State"
#: ../../source/advanced/distributed.rst:94
msgid ""
"在目标函数中每个进程的训练流程与单机单卡的训练并没有差异。之所以可以这样,是因为 MegEngine 将多进程间参数状态的同步隐藏在了 "
":class:`~.Optimizer` 中。"
msgstr "Within the target function, each process's training workflow is no different from the single-machine, single-GPU case. This is possible because MegEngine hides the synchronization of parameter state across processes inside :class:`~.Optimizer`."
#: ../../source/advanced/distributed.rst:96
msgid ""
"具体来说, :class:`~.Optimizer` 通过 :func:`~.util.is_distributed` "
"得知当前处于分布式训练状态,会在构造函数和 :meth:`~.Optimizer.step` 中自动完成多进程间参数的同步,即调用 "
":func:`~.distributed.functional.bcast_param` 。"
msgstr "Specifically, :class:`~.Optimizer` learns via :func:`~.util.is_distributed` that it is in distributed training, and automatically synchronizes parameters across processes in its constructor and in :meth:`~.Optimizer.step`, by calling :func:`~.distributed.functional.bcast_param`."
#: ../../source/advanced/distributed.rst:98
msgid ""
"所以每个进程在执行训练代码阶段,定义 :class:`~.Optimizer` 以及每个迭代中调用 "
":meth:`~.Optimizer.step` "
"修改参数值时,都会自动广播自己进程当时的参数值,实现所有进程在开始训练时以及每轮迭代之后的训练状态是统一的。"
msgstr "Therefore, whenever a process defines an :class:`~.Optimizer` or updates parameter values by calling :meth:`~.Optimizer.step` in an iteration, it automatically broadcasts its current parameter values, so that the training state of all processes is identical at the start of training and after every iteration."
#: ../../source/advanced/distributed.rst:101
msgid "模型保存与加载"
msgstr "Saving and Loading Models"
#: ../../source/advanced/distributed.rst:103
msgid "在 MegEngine 中,依赖于上面提到的状态同步机制,我们保持了各个进程状态的一致,使得可以很容易地实现模型的保存和加载。"
msgstr "In MegEngine, relying on the state synchronization mechanism above, the state of all processes is kept consistent, which makes saving and loading models straightforward."
#: ../../source/advanced/distributed.rst:105
msgid ""
"具体来说,由于我们在定义优化器时会进行参数同步,所以我们只需在定义优化器之前,在主进程(rank 0 "
"进程)中加载模型参数,那么其它进程便会被自动更新为加载后的参数。"
msgstr "Specifically, since parameters are synchronized when the optimizer is defined, we only need to load the model parameters in the main process (the rank 0 process) before defining the optimizer; the other processes are then updated to the loaded parameters automatically."
#: ../../source/advanced/distributed.rst:107
msgid "同理,保存参数只需要在每个迭代执行完 :meth:`~.Optimizer.step` 之后进行,也能保证此时保存的状态是所有进程相同的。"
msgstr "Similarly, saving parameters right after :meth:`~.Optimizer.step` in each iteration guarantees that the saved state is identical across all processes."
#: ../../source/advanced/distributed.rst:109
msgid "可以参考以下示例代码实现:"
msgstr "See the following example code:"
#: ../../source/advanced/distributed.rst:132
msgid "使用 DataLoader 进行数据加载"
msgstr "Loading Data with DataLoader"
#: ../../source/advanced/distributed.rst:134
msgid ""
"在上一节,为了简单起见,我们将整个数据集全部载入内存,实际中,我们可以通过 :class:`~.dataloader.DataLoader` "
"来更高效地加载数据。关于 :class:`~.dataloader.DataLoader` 的基本用法可以参考基础学习的 "
":ref:`data_load` 部分。"
msgstr "In the previous section, for simplicity, we loaded the whole dataset into memory. In practice, we can load data more efficiently with :class:`~.dataloader.DataLoader`. For the basic usage of :class:`~.dataloader.DataLoader`, see the :ref:`data_load` part of the basics."
#: ../../source/advanced/distributed.rst:136
msgid ""
":class:`~.dataloader.DataLoader` "
"会自动帮我们处理分布式训练时数据相关的问题,可以实现使用单卡训练时一样的数据加载代码,具体来说:"
msgstr ":class:`~.dataloader.DataLoader` automatically handles the data-related aspects of distributed training, so you can use the same data-loading code as in single-GPU training. Specifically:"
#: ../../source/advanced/distributed.rst:138
msgid "所有采样器 :class:`~.sampler.Sampler` 都会自动地做类似上文中数据切分的操作,使得所有进程都能获取互不重复的数据。"
msgstr "All samplers :class:`~.sampler.Sampler` automatically split the data, similar to what we did above, so that all processes receive non-overlapping data."
#: ../../source/advanced/distributed.rst:139
msgid ""
"每个进程的 :class:`~.dataloader.DataLoader` "
"还会自动调用分布式相关接口实现内存共享,避免不必要的内存占用,从而显著加速数据读取。"
msgstr "The :class:`~.dataloader.DataLoader` in each process also automatically invokes distributed interfaces to share memory, avoiding unnecessary memory footprint and thus significantly speeding up data loading."
#: ../../source/advanced/distributed.rst:141
msgid ""
"总结一下,在分布式训练时,你无需对使用 :class:`~.dataloader.DataLoader` "
"的方式进行任何修改,一切都能无缝地切换。完整的例子见 `MegEngine/models "
"<https://github.com/MegEngine/models/blob/master/official/vision/classification/resnet/train.py>`_"
" 。"
msgstr "In summary, during distributed training you do not need to change the way you use :class:`~.dataloader.DataLoader` at all; everything switches over seamlessly. For a complete example, see `MegEngine/models <https://github.com/MegEngine/models/blob/master/official/vision/classification/resnet/train.py>`_ ."
#: ../../source/advanced/distributed.rst:144
msgid "多机多卡"
msgstr "Multiple Machines, Multiple GPUs"
#: ../../source/advanced/distributed.rst:146
msgid ""
"在 MegEngine 中,我们能很方便地将上面单机多卡的代码修改为多机多卡,只需修改传给 "
":func:`~.init_process_group` 的总共进程数目 ``world_size`` 和当前进程序号 ``rank`` "
"参数。即只需在计算每台机器中每个进程的序号时,考虑到机器节点 ID ( ``node_id`` "
")即可。另外选择其中一台机器作为主节点(master node),将其 IP 地址和通信端口提供给所有机器即可。"
msgstr "In MegEngine, the single-machine multi-GPU code above can easily be adapted to multiple machines: we only need to change the total number of processes ``world_size`` and the current rank ``rank`` passed to :func:`~.init_process_group`, i.e. take the machine node ID ( ``node_id`` ) into account when computing the rank of each process on each machine. In addition, pick one machine as the master node and provide its IP address and communication port to all machines."
#: ../../source/advanced/distributed.rst:148
msgid "首先需要修改目标函数传入的参数:"
msgstr "First, the parameters passed to the target function need to change:"
#: ../../source/advanced/distributed.rst:150
msgid "新增 ``num_nodes`` :表示总共有多少机器;"
msgstr "add ``num_nodes`` : the total number of machines;"
#: ../../source/advanced/distributed.rst:151
msgid "新增 ``node_id`` :表示当前机器的 ID;"
msgstr "add ``node_id`` : the ID of the current machine;"
#: ../../source/advanced/distributed.rst:152
msgid "``num_devices`` -> ``devs_per_node`` :表示每个机器上拥有的 GPU 数量;"
msgstr "``num_devices`` -> ``devs_per_node`` : the number of GPUs on each machine;"
#: ../../source/advanced/distributed.rst:153
msgid "``rank`` -> ``local_rank`` :表示当前进程在当前机器上的序号;"
msgstr "``rank`` -> ``local_rank`` : the rank of the current process on the current machine;"
#: ../../source/advanced/distributed.rst:154
msgid "``server`` -> ``master_ip`` :从原先的本机地址(localhost)变为主节点的内网 IP 地址;"
msgstr "``server`` -> ``master_ip`` : changed from the local address (localhost) to the internal IP address of the master node;"
#: ../../source/advanced/distributed.rst:155
msgid "``port`` -> ``master_port`` :表示主节点用于通信的端口;"
msgstr "``port`` -> ``master_port`` : the port the master node uses for communication;"
#: ../../source/advanced/distributed.rst:157
msgid "然后需要计算得到全局的进程序号(global_rank),代码如下所示:"
msgstr "Then the global rank (global_rank) needs to be computed, as shown in the following code:"
#: ../../source/advanced/distributed.rst:169
msgid "其它部分与单机版本完全相同。最终只需在每个机器上执行相同的 Python 程序,即可实现多机多卡的分布式训练。"
msgstr "Everything else is identical to the single-machine version. Finally, just run the same Python program on every machine to perform multi-machine multi-GPU distributed training."
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/advanced/index.rst:4
msgid "引言"
msgstr "Introduction"
#: ../../source/advanced/index.rst:6
msgid "在这部分,您将了解 MegEngine 的一些高级用法。"
msgstr "In this part, you will learn some advanced usage of MegEngine."
#: ../../source/advanced/index.rst:8
msgid "为了学习这部分内容,您需要掌握 :ref:`基础学习 <basic>` 内容。"
msgstr "To study this part, you need to have mastered the content of :ref:`Basics <basic>`."
#: ../../source/advanced/index.rst:10
msgid "这部分共包含四个小节,彼此相对独立,您可以根据个人兴趣和需求进行选择性阅读。"
msgstr "This part consists of five relatively independent sections; you can read selectively according to your interests and needs."
#: ../../source/advanced/index.rst:12
msgid ":ref:`distributed` :介绍如何进行分布式训练模型。"
msgstr ":ref:`distributed` : introduces how to train models in a distributed way."
#: ../../source/advanced/index.rst:14
msgid ":ref:`parameter_more_setting` :介绍更加细粒度的参数优化设置方法。"
msgstr ":ref:`parameter_more_setting` : introduces finer-grained settings for parameter optimization."
#: ../../source/advanced/index.rst:16
msgid ":ref:`sublinear` :介绍 MegEngine 的亚线性内存优化技术。"
msgstr ":ref:`sublinear` : introduces MegEngine's sublinear memory optimization technique."
#: ../../source/advanced/index.rst:18
msgid ":ref:`two_static_mode` :介绍 MegEngine 中静态图的两种模式。"
msgstr ":ref:`two_static_mode` : introduces the two modes of static graphs in MegEngine."
#: ../../source/advanced/index.rst:20
msgid ":ref:`deployment` :介绍如何将 MegEngine 模型在 C++ 环境下运行。"
msgstr ":ref:`deployment` : introduces how to run MegEngine models in a C++ environment."
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/advanced/load_pytorch.rst:4
msgid "在 MegEngine 中嵌入 PyTorch 子图(Experimental)"
msgstr "Embedding PyTorch Subgraphs in MegEngine (Experimental)"
#: ../../source/advanced/load_pytorch.rst:6
msgid ""
"MegEngine 支持在网络搭建过程中嵌入 PyTorch 模块。 该功能可以方便用户轻松地将已有的 PyTorch 模块移植到 "
"MegEngine 框架中使用。"
msgstr "MegEngine supports embedding PyTorch modules while building a network. This feature makes it easy to port existing PyTorch modules into the MegEngine framework."
#: ../../source/advanced/load_pytorch.rst:9
msgid "安装本章节所需的 Python 库"
msgstr "Install the Python packages required by this chapter"
#: ../../source/advanced/load_pytorch.rst:15
msgid ""
"对于一个已有的 PyTorch 模块,我们可以利用 MegEngine 中提供的 :class:`~.pytorch.PyTorchModule`"
" 将它包裹(wrap)成与 MegEngine :class:`~.Module` 兼容的模块。"
msgstr "For an existing PyTorch module, we can wrap it into a module compatible with MegEngine's :class:`~.Module` using :class:`~.pytorch.PyTorchModule` provided by MegEngine."
#: ../../source/advanced/load_pytorch.rst:17
msgid ""
"为了方便演示,假设有一个现成的基于 PyTorch 实现的特征提取模块 ``LeNetExtractor`` (不包含 LeNet "
"网络结构中的分类层)。在 MegEngine 框架中,我们将这个 PyTorch 模块包裹,只需额外实现一层线性分类器,即可完成 LeNet "
"网络的搭建。"
msgstr "For demonstration, assume we have a ready-made PyTorch feature extraction module ``LeNetExtractor`` (the LeNet architecture without its classification layer). In MegEngine, we wrap this PyTorch module and only need to implement one extra linear classifier to complete the LeNet network."
#: ../../source/advanced/load_pytorch.rst:19
msgid "代码如下:"
msgstr "The code is as follows:"
#: ../../source/advanced/load_pytorch.rst:47
msgid "基于 PyTorch 的 LeNetExtractor 代码如下:"
msgstr "The PyTorch-based LeNetExtractor code is as follows:"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/advanced/parameter_more_setting.rst:4
msgid "更细粒度的参数优化设置"
msgstr "Finer-Grained Parameter Optimization Settings"
#: ../../source/advanced/parameter_more_setting.rst:6
msgid "在 :ref:`train_and_evaluation` 中网络使用如下优化器进行训练:"
msgstr "In :ref:`train_and_evaluation`, the network is trained with the following optimizer:"
#: ../../source/advanced/parameter_more_setting.rst:16
msgid "这个优化器对所有参数都使用同一学习速率进行优化,而在本章中我们将介绍如何做到对不同的参数采用不同的学习速率。"
msgstr "This optimizer uses the same learning rate for all parameters. In this chapter we introduce how to use different learning rates for different parameters."
#: ../../source/advanced/parameter_more_setting.rst:18
msgid ""
"本章我们沿用 :ref:`network_build` 中创建的 ``LeNet`` ,下述的优化器相关代码可以用于取代 "
":ref:`train_and_evaluation` 中对应的代码。"
msgstr "We continue to use the ``LeNet`` created in :ref:`network_build`; the optimizer-related code below can replace the corresponding code in :ref:`train_and_evaluation`."
#: ../../source/advanced/parameter_more_setting.rst:22
msgid "不同参数使用不同的学习速率"
msgstr "Different Learning Rates for Different Parameters"
#: ../../source/advanced/parameter_more_setting.rst:24
msgid ""
":class:`~.Optimizer` 支持将网络的参数进行分组,不同的参数组可以采用不同的学习速率进行训练。 "
"一个参数组由一个字典表示,这个字典中必然有键值对: ``'params': param_list`` ,用来指定参数组包含的参数。该字典还可以包含"
" ``'lr':learning_rate`` "
"来指定此参数组的学习速率。此键值对有时可省略,省略后参数组的学习速率由优化器指定。所有待优化参数组的字典会组成一个列表作为 "
":class:`~.Optimizer` 实例化时的第一个参数传入。"
msgstr ":class:`~.Optimizer` supports grouping the network's parameters, and different parameter groups can be trained with different learning rates. A parameter group is represented by a dictionary that must contain the key-value pair ``'params': param_list`` specifying the parameters in the group. The dictionary may also contain ``'lr': learning_rate`` specifying the learning rate of the group; this pair may be omitted, in which case the group's learning rate is determined by the optimizer. The dictionaries of all parameter groups to be optimized form a list, which is passed as the first argument when instantiating :class:`~.Optimizer`."
#: ../../source/advanced/parameter_more_setting.rst:26
msgid ""
"为了更好的说明参数组,我们首先使用 :class:`~.Module` 提供的 :meth:`~.Module.named_parameters`"
" 函数来对网络参数进行分组。这个函数返回一个包含网络所有参数并且以参数名字为键、参数变量为值的字典:"
msgstr "To better illustrate parameter groups, we first use the :meth:`~.Module.named_parameters` function provided by :class:`~.Module` to group the network's parameters. This function returns a dictionary containing all parameters of the network, keyed by parameter name with the parameter variables as values:"
#: ../../source/advanced/parameter_more_setting.rst:46
msgid "根据参数的名字我们可以将 ``LeNet`` 中所有卷积的参数分为一组,所有全连接层的参数分为另一组:"
msgstr "Based on the parameter names, we can put all convolution parameters of ``LeNet`` into one group and all fully connected layer parameters into another:"
#: ../../source/advanced/parameter_more_setting.rst:59
msgid "分组后即可根据下述代码对不同参数组设置不同的学习速率:"
msgstr "After grouping, different learning rates can be set for different parameter groups with the following code:"
#: ../../source/advanced/parameter_more_setting.rst:74
msgid "优化器中设置的参数组列表对应于 :attr:`~.Optimizer.param_groups` 属性。我们可以通过其获取不同参数组的学习速率。"
msgstr "The list of parameter groups configured in the optimizer corresponds to the :attr:`~.Optimizer.param_groups` attribute, through which we can retrieve the learning rate of each parameter group."
#: ../../source/advanced/parameter_more_setting.rst:89
msgid "训练中对学习速率的更改"
msgstr "Changing the Learning Rate During Training"
#: ../../source/advanced/parameter_more_setting.rst:91
msgid ""
"MegEngine "
"也支持在训练过程中对学习速率进行修改,比如部分参数训练到一定程度后就不再需要优化,此时将对应参数组的学习速率设为零即可。我们修改 "
":ref:`train_and_evaluation` "
"中的训练代码进行示例说明。修改后的训练代码总共训练四个epoch,我们会在第二个epoch结束时将所有全连接层参数的学习速率置零,并在每个epoch当中输出"
" ``LeNet`` 中全连接层的部分参数值以显示是否被更新。"
msgstr "MegEngine also supports modifying the learning rate during training. For example, once some parameters are sufficiently trained and no longer need optimization, we can set the learning rate of their parameter group to zero. We illustrate this by modifying the training code in :ref:`train_and_evaluation`. The modified code trains for four epochs in total; at the end of the second epoch we set the learning rate of all fully connected layer parameters to zero, and in each epoch we print some parameter values of the fully connected layers of ``LeNet`` to show whether they are being updated."
#: ../../source/advanced/parameter_more_setting.rst:131
msgid "从输出可以看到在学习速率设为0之前参数值是在不断更新的,但是在设为0之后参数值就不再变化。"
msgstr "The output shows that the parameter values keep updating before the learning rate is set to 0, and stop changing afterwards."
#: ../../source/advanced/parameter_more_setting.rst:133
msgid "同时多数网络在训练当中会不断减小学习速率,如下代码展示了 MegEnging 是如何在训练过程中线性减小学习速率的:"
msgstr "Meanwhile, most networks gradually reduce the learning rate during training. The following code shows how to decrease the learning rate linearly during training in MegEngine:"
#: ../../source/advanced/parameter_more_setting.rst:147
msgid "固定部分参数不优化"
msgstr "Freezing Some Parameters"
#: ../../source/advanced/parameter_more_setting.rst:149
msgid ""
"除了将不训练的参数分为一组并将学习速率设为零外,MegEngine "
"还提供了其他途径来固定参数不进行优化:仅将需要优化的参数与优化器绑定即可。如下代码所示,我们仅对 ``LeNet`` 中的卷积参数进行优化:"
msgstr "Besides putting the parameters that should not be trained into a group and setting its learning rate to zero, MegEngine provides another way to freeze parameters: simply bind only the parameters to be optimized to the optimizer. As shown below, we optimize only the convolution parameters of ``LeNet``:"
#: ../../source/advanced/parameter_more_setting.rst:166
msgid "下述代码将上面的设置加入到了具体训练当中,能够更加直观的看到各个参数的梯度差异:"
msgstr "The following code integrates the settings above into actual training, making the gradient differences between the parameters more visible:"
#: ../../source/advanced/parameter_more_setting.rst:210
msgid "从输出可以看到除了卷积参数有梯度外其余参数均没有梯度也就不会更新。"
msgstr "The output shows that only the convolution parameters have gradients; the other parameters have no gradients and are therefore not updated."
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/advanced/sublinear.rst:4
msgid "亚线性内存优化"
msgstr "Sublinear Memory Optimization"
#: ../../source/advanced/sublinear.rst:6
msgid ""
"使用大 batch size 通常能够提升深度学习模型性能。然而,我们经常遇到的困境是有限的 GPU 内存资源无法满足大 batch size "
"模型训练。为了缓解这一问题, MegEngine 提供了亚线性内存 ( sublinear memory ) "
"优化技术用于降低网络训练的内存占用量。该技术基于 `gradient checkpointing "
"<https://arxiv.org/abs/1604.06174>`_ 算法,通过事先搜索最优的计算图节点作为前向传播和反向传播检查点( "
"checkpoints ),省去其它中间结果存储,大幅节约了内(显)存使用。"
msgstr "Using a large batch size usually improves the performance of deep learning models. However, we often face the dilemma that limited GPU memory cannot accommodate training with a large batch size. To alleviate this, MegEngine provides a sublinear memory optimization technique to reduce the memory footprint of network training. Based on the `gradient checkpointing <https://arxiv.org/abs/1604.06174>`_ algorithm, it searches in advance for the optimal computing graph nodes to serve as checkpoints for the forward and backward passes, discarding the storage of other intermediate results and thus saving a substantial amount of memory (GPU memory)."
#: ../../source/advanced/sublinear.rst:8
msgid "用户通过如下的环境变量设置开启亚线性内存优化:"
msgstr "Users enable sublinear memory optimization with the following environment variable settings:"
#: ../../source/advanced/sublinear.rst:22
msgid ""
"亚线性内存技术仅适用于 MegEngine 静态图模式。这种内存优化方式在编译计算图和训练模型时会有少量的额外时间开销。下面我们以 "
"`ResNet50 <https://arxiv.org/abs/1512.03385>`_ "
"为例,说明使用亚线性内存优化能够大幅节约网络训练显存使用。"
msgstr "The sublinear memory technique only works in MegEngine's static graph mode. This memory optimization incurs a small extra time cost when compiling the computing graph and training the model. Below we take `ResNet50 <https://arxiv.org/abs/1512.03385>`_ as an example to show that sublinear memory optimization saves a substantial amount of GPU memory during network training."
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/advanced/two_static_mode.rst:4
msgid "静态图的两种模式"
msgstr "Two Modes of Static Graphs"
#: ../../source/advanced/two_static_mode.rst:5
msgid ""
"在前面的 :ref:`dynamic_and_static_graph` 中,我们介绍了静态图的优点,以及如何使用 "
":class:`~.trace` 功能实现动静态图的转换。本节中,我们进一步介绍静态图的两种模式。"
msgstr "In :ref:`dynamic_and_static_graph` we introduced the advantages of static graphs and how to convert between dynamic and static graphs with :class:`~.trace`. In this section, we further introduce the two modes of static graphs."
#: ../../source/advanced/two_static_mode.rst:7
msgid "使用 :class:`~.trace` 装饰一个训练(或者测试)函数时,可以指定 ``symbolic`` 参数,示例代码如下:"
msgstr "When decorating a training (or testing) function with :class:`~.trace`, you can specify the ``symbolic`` parameter, as shown in the example below:"
#: ../../source/advanced/two_static_mode.rst:15
msgid "``symbolic`` 的取值为True或者False,其含义如下:"
msgstr "``symbolic`` takes the value True or False, with the following meaning:"
#: ../../source/advanced/two_static_mode.rst:17
msgid ""
"True 表示“静态构造”或者“根据符号构造”。此时,计算图中的所有数据节点(即张量)被视为符号(即 "
"``symbolic``)。它们仅仅作为占位符(placeholder),不产生实际的内存分配,也没有实际的值。此时计算图的编译过程完全取决于计算图的结构,而不取决于张量的具体值,是真正的“静态”。"
msgstr "True means “static construction” or “construction by symbol”. In this case, all data nodes (i.e. tensors) in the computing graph are treated as symbols (hence ``symbolic``). They serve only as placeholders, with no actual memory allocation and no actual values. The compilation of the graph then depends entirely on the structure of the graph rather than on the concrete values of the tensors, making it truly “static”."
#: ../../source/advanced/two_static_mode.rst:19
msgid ""
"False 表示“动态构造”或者“根据值构造”。此时,被 :class:`~.trace` "
"装饰的函数在第一次被调用时,会根据输入的数据执行一次计算,这次计算会构建出一个动态图。然后,这个动态图会被编译为一个静态图。此后,该函数的所有调用都会运行这个静态图,而不再依赖调用时输入的值。此种模式可以视为“动态构建第一次,此后静态运行”。"
" **MegEngine 默认使用此模式。** 这也是PyTorch中的 trace 功能所采用的模式。"
msgstr "False means “dynamic construction” or “construction by value”. In this case, when the function decorated by :class:`~.trace` is called for the first time, it performs one computation based on the input data, building a dynamic graph. This dynamic graph is then compiled into a static graph. All subsequent calls of the function run this static graph, no longer depending on the input values at call time. This mode can be seen as “build dynamically once, then run statically”. **MegEngine uses this mode by default.** It is also the mode adopted by the trace feature in PyTorch."
#: ../../source/advanced/two_static_mode.rst:21
msgid "下面我们通过示例代码说明两种模式下构图过程的区别。"
msgstr "Below we use example code to illustrate the difference in graph construction between the two modes."
#: ../../source/advanced/two_static_mode.rst:36
msgid "输出为:"
msgstr "The output is:"
#: ../../source/advanced/two_static_mode.rst:42
msgid ""
"如上所示,当 ``symbolic=True`` 时,网络的输出 Tensor 并未被赋值。如果我们将 ``symbolic`` 改为 "
"False,重新执行上面的代码将得到:"
msgstr "As shown above, with ``symbolic=True`` the network's output Tensor is not assigned a value. If we change ``symbolic`` to False and rerun the code above, we get:"
#: ../../source/advanced/two_static_mode.rst:48
msgid "可以看到,此时网络的输出 Tensor 是有结果值的。也就说,计算图确实被构造和执行了。"
msgstr "We can see that the network's output Tensor now has a result value; that is, the computing graph was actually built and executed."
#: ../../source/advanced/two_static_mode.rst:50
msgid "在绝大部分情况下,两种模式下构造出的静态图并没有区别,使用中也没有分别。然而,它们有一些细微的区别需要注意。"
msgstr "In the vast majority of cases, the static graphs built in the two modes are identical and indistinguishable in use. However, there are some subtle differences to be aware of."
#: ../../source/advanced/two_static_mode.rst:52
msgid ""
"``symbolic=False`` "
"的模式下,由于第一次运行和构建计算图的过程依赖于输入,这提供了一定的“动态灵活性”。根据第一次运行时信息的不同,可以构建出不同的静态图。这种灵活性是"
" ``symbolic=True`` "
"的模式无法提供的。例如,可以在网络搭建中写诸如“如果条件满足,则执行分支1,否则执行分支2”的语句。注意,如果这样的条件语句在循环中,那么在循环的第一次执行中构造出的静态图将固定不再改变,即使在循环的后续执行中,该条件语句的结果发生了变化。这是容易造成问题和误解的地方。"
msgstr "In ``symbolic=False`` mode, since the first run and the graph construction depend on the input, a certain “dynamic flexibility” is available: depending on the information of the first run, different static graphs can be built. This flexibility is unavailable in ``symbolic=True`` mode. For example, when building the network you can write statements like “if the condition holds, take branch 1; otherwise take branch 2”. Note that if such a conditional statement sits inside a loop, the static graph built during the first loop iteration stays fixed, even if the outcome of the condition changes in later iterations. This is an easy source of bugs and misunderstanding."
#: ../../source/advanced/two_static_mode.rst:54
msgid ""
"``symbolic=False`` "
"的模式的一个缺点是,由于第一次的运行在动态图模式下,无法利用静态图的内存优化,通常会耗费更大的内存。这可能导致本来在静态图模式下可以运行的网络,在第一次运行时由于内存不够而失败。"
msgstr "A drawback of ``symbolic=False`` mode is that, since the first run happens in dynamic graph mode, it cannot benefit from the memory optimizations of static graphs and usually consumes more memory. This may cause a network that would otherwise run in static graph mode to fail on its first run due to insufficient memory."
#: ../../source/advanced/two_static_mode.rst:56
msgid ""
"与之相对,``symbolic=True`` "
"的模式具有静态图完全的优点和缺点:始终高效,但缺乏灵活性。如果网络中包含了需要运行时动态信息才能计算的条件语句,该模式将会失败。"
msgstr "In contrast, ``symbolic=True`` mode has exactly the strengths and weaknesses of static graphs: always efficient, but inflexible. If the network contains conditional statements that can only be evaluated with runtime information, this mode will fail."
#: ../../source/advanced/two_static_mode.rst:58
msgid "具体应用中,用户需要根据情况灵活选择使用哪种模式。"
msgstr "In practice, users should choose between the two modes flexibly according to the situation."
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/api.rst:4
msgid "API Reference"
msgstr "API Reference"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/api_zh/megengine.distributed.rst:2
msgid "megengine.distributed package"
msgstr "megengine.distributed package"
#: ../../source/api_zh/megengine.distributed.rst:11
msgid "megengine.distributed.brainpp"
msgstr "megengine.distributed.brainpp"
#: megengine.distributed.brainpp.InitServer:1 of
msgid "Bases: :class:`object`"
msgstr "基类: :class:`object`"
#: megengine.distributed.brainpp.InitServer:1 of
msgid "server for synchronizing master hostname and port"
msgstr "用于同步master主机名和端口的服务器"
#: megengine.distributed.brainpp.ThreadXMLRPCServer:1 of
msgid ""
"Bases: :class:`socketserver.ThreadingMixIn`, "
":class:`xmlrpc.server.SimpleXMLRPCServer`"
msgstr ""
"基类: :class:`socketserver.ThreadingMixIn`, "
":class:`xmlrpc.server.SimpleXMLRPCServer`"
#: megengine.distributed.brainpp.dist_tqdm:1 of
msgid ""
"a wrapper of tqdm, only rank0 prints the progress bar, otherwise multiple "
"stdouts might mix up with each other in rlaunch mode"
msgstr "tqdm的封装,仅rank0下打印进度条,否则在rlaunch模式下多个标准输出可能会互相混淆"
#: megengine.distributed.brainpp.init_connect:1 of
msgid "client for synchronizing master hostname and port"
msgstr "用于同步master主机名和端口的客户端"
#: megengine.distributed.brainpp.launcher:1 of
msgid "get rlaunch environment and synchronize master address"
msgstr "获得rlaunch环境并同步主机地址"
#: ../../source/api_zh/megengine.distributed.rst:19
msgid "megengine.distributed.functional"
msgstr "megengine.distributed.functional"
#: megengine.distributed.functional.all_gather:1 of
msgid "Create all_gather operator for collective communication"
msgstr "创建 all_gather 算子,用于聚合通信(collective communication)"
#: megengine.distributed.functional.all_gather
#: megengine.distributed.functional.all_reduce_max
#: megengine.distributed.functional.all_reduce_min
#: megengine.distributed.functional.all_reduce_sum
#: megengine.distributed.functional.bcast_param
#: megengine.distributed.functional.broadcast
#: megengine.distributed.functional.reduce_scatter_sum
#: megengine.distributed.functional.reduce_sum
#: megengine.distributed.helper.collective_comm_symvar
#: megengine.distributed.util.init_process_group of
msgid "Parameters"
msgstr "参数"
#: megengine.distributed.functional.all_gather:4
#: megengine.distributed.functional.all_reduce_max:4
#: megengine.distributed.functional.all_reduce_min:4
#: megengine.distributed.functional.all_reduce_sum:4
#: megengine.distributed.functional.broadcast:4
#: megengine.distributed.functional.reduce_scatter_sum:4
#: megengine.distributed.functional.reduce_sum:4 of
msgid "input tensor"
msgstr "输入张量"
#: megengine.distributed.functional.all_gather:6
#: megengine.distributed.functional.all_reduce_max:6
#: megengine.distributed.functional.all_reduce_min:6
#: megengine.distributed.functional.all_reduce_sum:6
#: megengine.distributed.functional.bcast_param:6
#: megengine.distributed.functional.broadcast:6
#: megengine.distributed.functional.reduce_scatter_sum:6
#: megengine.distributed.functional.reduce_sum:6
#: megengine.distributed.helper.collective_comm_symvar:6 of
msgid "unique identifier for collective communication"
msgstr "聚合通信的唯一标识符"
#: megengine.distributed.functional.all_gather:8
#: megengine.distributed.functional.all_reduce_max:8
#: megengine.distributed.functional.all_reduce_min:8
#: megengine.distributed.functional.all_reduce_sum:8
#: megengine.distributed.functional.bcast_param:8
#: megengine.distributed.functional.broadcast:8
#: megengine.distributed.functional.reduce_scatter_sum:8
#: megengine.distributed.functional.reduce_sum:8
#: megengine.distributed.helper.collective_comm_symvar:10 of
msgid "number of ranks, use util.get_world_size() as default"
msgstr "进程数目,默认使用 util.get_world_size()"
#: megengine.distributed.functional.all_gather:10
#: megengine.distributed.functional.reduce_scatter_sum:10 of
msgid "rank of this node"
msgstr "该节点的编号"
#: megengine.distributed.functional.all_gather
#: megengine.distributed.functional.all_reduce_max
#: megengine.distributed.functional.all_reduce_min
#: megengine.distributed.functional.all_reduce_sum
#: megengine.distributed.functional.bcast_param
#: megengine.distributed.functional.broadcast
#: megengine.distributed.functional.reduce_scatter_sum
#: megengine.distributed.functional.reduce_sum
#: megengine.distributed.helper.collective_comm_symvar
#: megengine.distributed.util.get_backend
#: megengine.distributed.util.get_free_ports
#: megengine.distributed.util.get_group_id
#: megengine.distributed.util.get_master_ip
#: megengine.distributed.util.get_master_port
#: megengine.distributed.util.get_rank
#: megengine.distributed.util.get_world_size
#: megengine.distributed.util.group_barrier
#: megengine.distributed.util.init_process_group
#: megengine.distributed.util.is_distributed of
msgid "Return type"
msgstr "返回类型"
#: megengine.distributed.functional.all_gather:13
#: megengine.distributed.functional.all_reduce_max:11
#: megengine.distributed.functional.all_reduce_min:11
#: megengine.distributed.functional.all_reduce_sum:11
#: megengine.distributed.functional.broadcast:13
#: megengine.distributed.functional.reduce_scatter_sum:13
#: megengine.distributed.functional.reduce_sum:13 of
msgid ":py:class:`~megengine.core.tensor.Tensor`"
msgstr ":py:class:`~megengine.core.tensor.Tensor`"
#: megengine.distributed.functional.all_reduce_max:1 of
msgid "Create all_reduce_max operator for collective communication"
msgstr "创建用于聚合通信(collective communication)的all_reduce_max算子"
#: megengine.distributed.functional.all_reduce_min:1 of
msgid "Create all_reduce_min operator for collective communication"
msgstr "创建用于聚合通信(collective communication)的all_reduce_min算子"
#: megengine.distributed.functional.all_reduce_sum:1 of
msgid "Create all_reduce_sum operator for collective communication"
msgstr "创建用于聚合通信(collective communication)的all_reduce_sum算子"
#: megengine.distributed.functional.bcast_param:1 of
msgid "Broadcast parameters among devices"
msgstr "向多个设备广播参数"
#: megengine.distributed.functional.bcast_param:4 of
msgid "input Buffer or Parameter to be synchronized"
msgstr "待同步的输入Buffer或者Parameter"
#: megengine.distributed.functional.bcast_param:10
#: megengine.distributed.functional.broadcast:10
#: megengine.distributed.functional.reduce_sum:10 of
msgid "whether this is a root node"
msgstr "该节点是否为根节点"
#: megengine.distributed.functional.bcast_param:13
#: megengine.distributed.util.group_barrier:4
#: megengine.distributed.util.init_process_group:17 of
msgid "``None``"
msgstr "``None``"
#: megengine.distributed.functional.broadcast:1 of
msgid "Create broadcast operator for collective communication"
msgstr "创建聚合通信(collective communication)的广播算子"
#: megengine.distributed.functional.reduce_scatter_sum:1 of
msgid "Create reduce_scatter_sum operator for collective communication"
msgstr "创建聚合通信(collective communication)的reduce_scatter_sum算子"
#: megengine.distributed.functional.reduce_sum:1 of
msgid "Create reduce_sum operator for collective communication"
msgstr "创建聚合通信(collective communication)的reduce_sum算子"
#: ../../source/api_zh/megengine.distributed.rst:27
msgid "megengine.distributed.helper"
msgstr "megengine.distributed.helper"
#: megengine.distributed.helper.collective_comm_symvar:1 of
msgid "Helper function for creating collective_comm operators"
msgstr "用于创建collective_comm算子的辅助函数"
#: megengine.distributed.helper.collective_comm_symvar:4 of
msgid "tensor or comp_graph"
msgstr "张量或计算图"
#: megengine.distributed.helper.collective_comm_symvar:8 of
msgid "mode of collective communication"
msgstr "聚合通信模式"
#: megengine.distributed.helper.collective_comm_symvar:12 of
msgid "whether this node is root node"
msgstr "该节点是否为根节点"
#: megengine.distributed.helper.collective_comm_symvar:14 of
msgid "output data type, use dtype of inp as default"
msgstr "输出数据的类型,默认使用inp的dtype"
#: megengine.distributed.helper.collective_comm_symvar:16 of
msgid "output comp node, use comp node of inp as default"
msgstr "输出的计算节点,默认使用inp的计算节点"
#: megengine.distributed.helper.collective_comm_symvar:18 of
msgid "output comp graph, use comp graph of inp as default"
msgstr "输出的计算图,默认使用inp的计算图"
#: megengine.distributed.helper.collective_comm_symvar:21 of
msgid ":py:class:`~megengine._internal.mgb.SymbolVar`"
msgstr ":py:class:`~megengine._internal.mgb.SymbolVar`"
#: ../../source/api_zh/megengine.distributed.rst:35
msgid "megengine.distributed.util"
msgstr "megengine.distributed.util"
#: megengine.distributed.util.get_backend:1 of
msgid "Get the backend str"
msgstr "获取字符串形式表示的后端"
#: megengine.distributed.util.get_backend:4
#: megengine.distributed.util.get_master_ip:4 of
msgid ":py:class:`str`"
msgstr ":py:class:`str`"
#: megengine.distributed.util.get_free_ports:1 of
msgid "Get one or more free ports."
msgstr "获得一个或多个空闲端口。"
#: megengine.distributed.util.get_free_ports:5 of
msgid ":py:class:`~typing.List`\\[:py:class:`int`]"
msgstr ":py:class:`~typing.List`\\[:py:class:`int`]"
#: megengine.distributed.util.get_group_id:1 of
msgid "Get group id for collective communication"
msgstr "获得聚合通信的group id"
#: megengine.distributed.util.get_group_id:4
#: megengine.distributed.util.get_master_port:4
#: megengine.distributed.util.get_rank:4
#: megengine.distributed.util.get_world_size:4 of
msgid ":py:class:`int`"
msgstr ":py:class:`int`"
#: megengine.distributed.util.get_master_ip:1 of
msgid "Get the IP address of the master node"
msgstr "获取主节点的IP地址"
#: megengine.distributed.util.get_master_port:1 of
msgid "Get the port of the rpc server on the master node"
msgstr "获取主节点上RPC服务器的端口"
#: megengine.distributed.util.get_rank:1 of
msgid "Get the rank of the current process"
msgstr "获取当前进程的进程号"
#: megengine.distributed.util.get_world_size:1 of
msgid "Get the total number of processes participating in the job"
msgstr "获取参与任务的进程总数"
#: megengine.distributed.util.group_barrier:1 of
msgid "Block until all ranks in the group reach this barrier"
msgstr "阻塞,直到组内所有进程(rank)到达该障碍点(barrier)"
#: megengine.distributed.util.init_process_group:1 of
msgid ""
"Initialize the distributed process group, and also specify the device used "
"in the current process."
msgstr "初始化分布式进程组,并且指定在当前进程中使用的设备。"
#: megengine.distributed.util.init_process_group:4 of
msgid "IP address of the master node."
msgstr "主节点的IP地址。"
#: megengine.distributed.util.init_process_group:6 of
msgid "Port available for all processes to communicate."
msgstr "所有进程之间进行通信的可用端口。"
#: megengine.distributed.util.init_process_group:8 of
msgid "Total number of processes participating in the job."
msgstr "参与任务的进程总数。"
#: megengine.distributed.util.init_process_group:10 of
msgid "Rank of the current process."
msgstr "当前进程的进程号。"
#: megengine.distributed.util.init_process_group:12 of
msgid "The GPU device id to bind this process to."
msgstr "待与该进程绑定的GPU设备号。"
#: megengine.distributed.util.init_process_group:14 of
msgid "Communicator backend, currently support 'nccl' and 'ucx'"
msgstr "Communicator的后端,目前支持'nccl'和'ucx'"
#: megengine.distributed.util.is_distributed:1 of
msgid "Return True if the distributed process group has been initialized"
msgstr "如果分布式进程组已完成初始化则返回True"
#: megengine.distributed.util.is_distributed:4 of
msgid ":py:class:`bool`"
msgstr ":py:class:`bool`"
#: megengine.distributed.util.synchronized:1 of
msgid ""
"Decorator. Decorated function will synchronize when finished. Specifically, "
"we use this to prevent data race during hub.load"
msgstr "装饰器(Decorator)。被装饰的函数在执行结束时会进行同步。具体而言,我们用它来防止hub.load期间的数据竞争"
#~ msgid "rank of the current process, use util.get_rank() as default"
#~ msgstr "当前进程的进程序号(rank),默认使用util.get_rank()"
#~ msgid "rank of root node, use 0 as default"
#~ msgstr "根节点(根进程)的进程序号(rank),使用0作为默认值"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-05-27 03:15-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/autogen/megengine.hub.rst:2
msgid "megengine.hub package"
msgstr "megengine.hub package"
#: ../../source/autogen/megengine.hub.rst:11
msgid "megengine.hub.const"
msgstr "megengine.hub.const"
#: ../../source/autogen/megengine.hub.rst:19
msgid "megengine.hub.exceptions"
msgstr "megengine.hub.exceptions"
#: megengine.hub.exceptions.FetcherError:1 of
msgid "Bases: :class:`Exception`"
msgstr "基类 :class:`Exception`"
#: megengine.hub.exceptions.FetcherError:1 of
msgid "Base class for fetch related error."
msgstr "获取相关错误的基类。"
#: megengine.hub.exceptions.GitCheckoutError:1
#: megengine.hub.exceptions.GitPullError:1
#: megengine.hub.exceptions.InvalidGitHost:1
#: megengine.hub.exceptions.InvalidProtocol:1
#: megengine.hub.exceptions.InvalidRepo:1 of
msgid "Bases: :class:`megengine.hub.exceptions.FetcherError`"
msgstr "基类: :class:`megengine.hub.exceptions.FetcherError`"
#: megengine.hub.exceptions.GitCheckoutError:1 of
msgid "A git checkout error occurred"
msgstr "git checkout产生异常"
#: megengine.hub.exceptions.GitPullError:1 of
msgid "A git pull error occurred"
msgstr "git pull产生异常"
#: megengine.hub.exceptions.InvalidGitHost:1 of
msgid "The git host provided was somehow invalid."
msgstr "由于某些原因,提供的git host无效。"
#: megengine.hub.exceptions.InvalidProtocol:1 of
msgid "The protocol provided was somehow invalid"
msgstr "由于某些原因,提供的协议无效"
#: megengine.hub.exceptions.InvalidRepo:1 of
msgid "The repo provided was somehow invalid."
msgstr "由于某些原因,所提供的代码仓库无效。"
#: ../../source/autogen/megengine.hub.rst:27
msgid "megengine.hub.fetcher"
msgstr "megengine.hub.fetcher"
#: megengine.hub.fetcher.GitHTTPSFetcher:1
#: megengine.hub.fetcher.GitSSHFetcher:1 of
msgid "Bases: :class:`megengine.hub.fetcher.RepoFetcherBase`"
msgstr "基类: :class:`megengine.hub.fetcher.RepoFetcherBase`"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:1 of
msgid "Fetches git repo by HTTPS protocol"
msgstr "使用HTTPS协议拉取远端git代码仓库"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch
#: megengine.hub.fetcher.GitSSHFetcher.fetch megengine.hub.hub.help
#: megengine.hub.hub.import_module megengine.hub.hub.list
#: megengine.hub.hub.load megengine.hub.hub.load_serialized_obj_from_url
#: megengine.hub.tools.cd megengine.hub.tools.check_module_exists
#: megengine.hub.tools.load_module of
msgid "Parameters"
msgstr "参数"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:4 of
msgid "host address of git repo example: github.com"
msgstr "git repo的主机地址。例如:github.com"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:8
#: megengine.hub.fetcher.GitSSHFetcher.fetch:8 of
msgid ""
"a string with format ``\"repo_owner/repo_name[:tag_name/:branch_name]\"`` "
"with an optional tag/branch. The default branch is ``master`` if not "
"specified. example: ``\"brain_sdk/MegBrain[:hub]\"``"
msgstr ""
"格式为 ``\"repo_owner/repo_name[:tag_name/:branch_name]\"`` "
"的字符串,其中tag/branch是可选的。 若不指明,则默认分支是 ``master`` 。 例如: "
"``\"brain_sdk/MegBrain[:hub]\"`` "
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:13 megengine.hub.hub.help:20
#: megengine.hub.hub.import_module:13 megengine.hub.hub.list:13
#: megengine.hub.hub.load:16 of
msgid "whether to use locally cached code or completely re-fetch"
msgstr "选择使用本地缓存的代码或完全重新拉取代码"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:16
#: megengine.hub.fetcher.GitSSHFetcher.fetch:16 megengine.hub.hub.help:23
#: megengine.hub.hub.import_module:16 megengine.hub.hub.list:16
#: megengine.hub.hub.load:19 of
msgid "commit id on github or gitlab"
msgstr "GitHub或GitLab的commit id"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:19
#: megengine.hub.fetcher.GitSSHFetcher.fetch:19 of
msgid ""
"whether to accept the stdout and stderr of the subprocess with PIPE, instead"
" of displaying on the screen"
msgstr "是否通过管道(PIPE)接收子进程(subprocess)的stdout和stderr,而不是在屏幕上显示它们"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch
#: megengine.hub.fetcher.GitSSHFetcher.fetch
#: megengine.hub.fetcher.RepoFetcherBase.fetch megengine.hub.hub.help
#: megengine.hub.hub.list megengine.hub.hub.load
#: megengine.hub.hub.load_serialized_obj_from_url megengine.hub.tools.cd
#: megengine.hub.tools.check_module_exists megengine.hub.tools.load_module of
msgid "Return type"
msgstr "返回类型"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:22
#: megengine.hub.fetcher.GitSSHFetcher.fetch:22
#: megengine.hub.fetcher.RepoFetcherBase.fetch:2 megengine.hub.hub.help:29 of
msgid ":py:class:`str`"
msgstr ":py:class:`str`"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch
#: megengine.hub.fetcher.GitSSHFetcher.fetch megengine.hub.hub.help
#: megengine.hub.hub.import_module megengine.hub.hub.list
#: megengine.hub.hub.load megengine.hub.hub.load_serialized_obj_from_url of
msgid "Returns"
msgstr "返回"
#: megengine.hub.fetcher.GitHTTPSFetcher.fetch:23
#: megengine.hub.fetcher.GitSSHFetcher.fetch:23 of
msgid "directory where the repo code is stored"
msgstr "repo代码的存储路径"
#: megengine.hub.fetcher.GitSSHFetcher.fetch:1 of
msgid "Fetches git repo by SSH protocol"
msgstr "使用SSH协议拉取远端git代码仓库"
#: megengine.hub.fetcher.GitSSHFetcher.fetch:4 of
msgid "host address of git repo. example: github.com"
msgstr "git repo的主机地址。例如:github.com"
#: megengine.hub.fetcher.GitSSHFetcher.fetch:13 of
msgid "whether to use locally fetched code or completely re-fetch"
msgstr "选择使用本地缓存的代码或完全重新拉取代码"
#: megengine.hub.fetcher.RepoFetcherBase:1 megengine.hub.hub.pretrained:1 of
msgid "Bases: :class:`object`"
msgstr "基类: :class:`object`"
#: ../../source/autogen/megengine.hub.rst:35
msgid "megengine.hub.hub"
msgstr "megengine.hub.hub"
#: megengine.hub.hub.list:1 of
msgid "Lists all entrypoints available in repo hubconf"
msgstr "列出repo中hubconf指定的所有可用的入口点"
#: megengine.hub.hub.help:8 megengine.hub.hub.import_module:4
#: megengine.hub.hub.list:4 megengine.hub.hub.load:4 of
msgid ""
"a string with format ``\"repo_owner/repo_name[:tag_name/:branch_name]\"`` "
"with an optional tag/branch. The default branch is ``master`` if not "
"specified. Example: ``\"brain_sdk/MegBrain[:hub]\"``"
msgstr ""
"格式为 ``\"repo_owner/repo_name[:tag_name/:branch_name]\"`` "
"的字符串,其中tag/branch是可选的。 若不指明,则默认分支是 ``master`` 。 例如: "
"``\"brain_sdk/MegBrain[:hub]\"`` "
#: megengine.hub.hub.help:16 megengine.hub.hub.import_module:9
#: megengine.hub.hub.list:9 megengine.hub.hub.load:12 of
msgid "host address of git repo Example: github.com"
msgstr "git仓库的主机地址,例如:github.com"
#: megengine.hub.hub.help:26 megengine.hub.hub.import_module:19
#: megengine.hub.hub.list:19 megengine.hub.hub.load:22 of
msgid ""
"which protocol to use to get the repo, and HTTPS protocol only supports "
"public repo on github. The value should be one of HTTPS, SSH."
msgstr "获取代码仓库所使用的协议;其中HTTPS协议只支持GitHub公共仓库。该参数值应为HTTPS或SSH之一。"
#: megengine.hub.hub.list:22 of
msgid ":py:class:`~typing.List`\\[:py:class:`str`]"
msgstr ":py:class:`~typing.List`\\[:py:class:`str`]"
#: megengine.hub.hub.list:23 of
msgid "all entrypoint names of the model"
msgstr "该模型的所有入口点(entrypoint)名称"
#: megengine.hub.hub.load:1 of
msgid "Loads model from github or gitlab repo, with pretrained weights."
msgstr "从GitHub或GitLab中加载具有预训练权重的模型。"
#: megengine.hub.hub.load:9 of
msgid "an entrypoint defined in hubconf"
msgstr "一个在hubconf中定义的入口点"
#: megengine.hub.hub.load:25 megengine.hub.hub.load_serialized_obj_from_url:10
#: of
msgid ":py:data:`~typing.Any`"
msgstr ":py:data:`~typing.Any`"
#: megengine.hub.hub.load:26 of
msgid "a single model with corresponding pretrained weights."
msgstr "单个模型,具有对应的预训练的权重。"
#: megengine.hub.hub.help:1 of
msgid ""
"This function returns docstring of entrypoint ``entry`` by following steps:"
msgstr "该函数通过以下步骤返回入口点 ``entry`` 的docstring:"
#: megengine.hub.hub.help:3 of
msgid "Pull the repo code specified by git and repo_info"
msgstr "拉取由git和repo_info指定的仓库代码"
#: megengine.hub.hub.help:4 of
msgid "Load the entry defined in repo's hubconf.py"
msgstr "加载仓库中hubconf.py所定义的条目"
#: megengine.hub.hub.help:5 of
msgid "Return docstring of function entry"
msgstr "返回函数入口的docstring(文档字符串)"
#: megengine.hub.hub.help:13 of
msgid "an entrypoint defined in hubconf.py"
msgstr "一个在hubconf.py中定义的入口点"
#: megengine.hub.hub.help:30 of
msgid "docstring of entrypoint ``entry``"
msgstr "入口点 ``entry`` 的文档字符串"
#: megengine.hub.hub.load_serialized_obj_from_url:1 of
msgid "Loads MegEngine serialized object from the given URL."
msgstr "加载给定URL中的MegEngine序列化对象。"
#: megengine.hub.hub.load_serialized_obj_from_url:3 of
msgid ""
"If the object is already present in ``model_dir``, it's deserialized and "
"returned. If no ``model_dir`` is specified, it will be "
"``MGE_HOME/serialized``."
msgstr ""
"如果对象已存在于 ``model_dir`` 中,则将其反序列化并返回。如果未指定 ``model_dir`` ,"
"则默认为 ``MGE_HOME/serialized`` 。"
#: megengine.hub.hub.load_serialized_obj_from_url:7 of
msgid "url to serialized object"
msgstr "序列化对象所在的url地址"
#: megengine.hub.hub.load_serialized_obj_from_url:8 of
msgid "dir to cache target serialized file"
msgstr "缓存目标序列文件的路径"
#: megengine.hub.hub.load_serialized_obj_from_url:11 of
msgid "loaded object"
msgstr "加载的对象"
#: megengine.hub.hub.pretrained:1 of
msgid ""
"Decorator which helps to download pretrained weights from the given url."
msgstr "装饰器,用来标识预训练权重的 url,以便于载入时自动下载权重。"
#: megengine.hub.hub.pretrained:3 of
msgid "For example, we can decorate a resnet18 function as follows"
msgstr "例如,我们可以按以下方式装饰一个resnet18函数"
#: megengine.hub.hub.pretrained:11 of
msgid ""
"When decorated function is called with ``pretrained=True``, MegEngine will "
"automatically download and fill the returned model with pretrained weights."
msgstr "当以 ``pretrained=True`` 调用被装饰的函数时,MegEngine会自动下载预训练权重并填入返回的模型。"
#: megengine.hub.hub.import_module:1 of
msgid "Imports hubmodule like python import"
msgstr "以类似Python import的方式导入hubmodule"
#: megengine.hub.hub.import_module:22 of
msgid "hubconf.py as a python module"
msgstr "作为一个python模块的hubconf.py"
#: ../../source/autogen/megengine.hub.rst:43
msgid "megengine.hub.tools"
msgstr "megengine.hub.tools"
#: megengine.hub.tools.cd:1 of
msgid "Changes current directory to target"
msgstr "将当前目录切换为target目录"
#: megengine.hub.tools.cd:4 of
msgid "target directory"
msgstr "目标路径"
#: megengine.hub.tools.cd:7 of
msgid ":py:class:`~typing.Iterator`\\[``None``]"
msgstr ":py:class:`~typing.Iterator`\\[``None``]"
#: megengine.hub.tools.check_module_exists:1 of
msgid "Checks whether python module exists or not"
msgstr "检查Python模块是否存在"
#: megengine.hub.tools.check_module_exists:4 of
msgid "name of module"
msgstr "模块的名称"
#: megengine.hub.tools.check_module_exists:7 of
msgid ":py:class:`bool`"
msgstr ":py:class:`bool`"
#: megengine.hub.tools.load_module:1 of
msgid "Loads module specified by name and path"
msgstr "加载由 ``name`` 和 ``path`` 指定的模块"
#: megengine.hub.tools.load_module:4 of
msgid "module name"
msgstr "模块名称"
#: megengine.hub.tools.load_module:6 of
msgid "module path"
msgstr "模块路径"
#: megengine.hub.tools.load_module:9 of
msgid ":py:class:`module`"
msgstr ":py:class:`module`"
#~ msgid "Fetch git repo by HTTPS protocol"
#~ msgstr "通过HTTPS协议拉取git仓库"
#~ msgid "host address of git repo dxample: github.com"
#~ msgstr "git repo的主机地址,例如:github.com"
#~ msgid ""
#~ "a string with format ``\"repo_owner/repo_name[:tag_name/:branch_name]\"`` "
#~ "with an optional tag/branch. The default branch is ``master`` if not "
#~ "specified. dxample: ``\"brain_sdk/MegBrain[:hub]\"``"
#~ msgstr ""
#~ "格式为 ``\"repo_owner/repo_name[:tag_name/:branch_name]\"`` "
#~ "的字符串,其中tag/branch是可选的。 若不指明,则默认分支是 ``master`` 。 例如: "
#~ "``\"brain_sdk/MegBrain[:hub]\"`` "
#~ msgid ""
#~ "Decorator helps quick link model function to existing pretrained weights."
#~ msgstr "装饰器(Decorator),有助于迅速关联模型函数到现有的预训练的权重。"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/api_zh/megengine.jit.rst:2
msgid "megengine.jit package"
msgstr "megengine.jit package"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:1
#: megengine.jit.trace:1 megengine.jit.unset:1 of
msgid "Bases: :class:`object`"
msgstr "基类: :class:`object`"
#: megengine.jit.trace:1 of
msgid "Wrap a callable and provide:"
msgstr "包装一个callable对象,并提供以下功能:"
#: megengine.jit.trace:3 of
msgid "tracing via :meth:`.trace` and :meth:`.dump`"
msgstr "通过 :meth:`.trace` 和 :meth:`.dump` 实现追溯(tracing)"
#: megengine.jit.trace:4 of
msgid "accelerated evalutaion via :meth:`.__call__`"
msgstr "通过调用 :meth:`.__call__` 加速执行"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig
#: megengine.jit.trace megengine.jit.trace.dump of
msgid "Parameters"
msgstr "参数"
#: megengine.jit.trace:7 of
msgid "Positional only argument."
msgstr "仅作为位置参数(positional only argument)。"
#: megengine.jit.trace:9 of
msgid "Whether to use symbolic tensor. Default: False"
msgstr "是否使用符号型张量(symbolic tensor)。默认:False。"
#: megengine.jit.trace:11 of
msgid "Optimization level for compiling trace."
msgstr "编译追踪函数时的优化级别。"
#: megengine.jit.trace:13 of
msgid "Log level."
msgstr "日志级别。"
#: megengine.jit.trace:15 of
msgid ""
"Configuration for sublinear memory optimization. If not None, it enables "
"sublinear memory optimization with given setting."
msgstr "配置次线性内存优化。如果不是None,则使用给定设置进行次线性内存优化。"
#: megengine.jit.trace:18 of
msgid ""
"Maximum size of an allreduce pack in MB. If not None, multiple gradients "
"will be packed and synchronized together"
msgstr "单个allreduce包的最大容量,以MB为单位。如果非None,则多个梯度将会被打包并一起同步"
#: megengine.jit.trace:21 of
msgid "Whether to profile compiled trace. Default: False"
msgstr "是否对编译好的函数追溯(trace)进行性能评估(profile)。默认:False。"
#: megengine.jit.trace.__call__:1 of
msgid ""
"Evaluate on provided arguments, using compiled trace instead of the original"
" callable if applicable."
msgstr "在给定参数上求值;如果适用,将使用编译好的追溯(trace)结果而非原始的callable对象。"
#: megengine.jit.trace.__call__ megengine.jit.trace.get_profile of
msgid "Returns"
msgstr "返回"
#: megengine.jit.trace.__call__:4 of
msgid ""
"``None`` or :class:`~.Tensor` or tuple of :class:`~.Tensor`, depending on "
"the return value of wrapped callable."
msgstr ""
"``None`` 、 :class:`~.Tensor` 或由 :class:`~.Tensor` 组成的元组,"
"取决于被包装的callable对象的返回值。"
#: megengine.jit.trace.dump:1 of
msgid "Serialize trace to file system."
msgstr "序列化被追溯 (trace) 的模型并保存到文件。"
#: megengine.jit.trace.dump:3 of
msgid "positional only argument. Path of output file."
msgstr "仅作为位置参数(positional only argument)。输出文件所在路径。"
#: megengine.jit.trace.dump:4 of
msgid "names of the input tensors in the traced function"
msgstr "被追溯(traced)函数的输入张量的名字"
#: megengine.jit.trace.dump:5 of
msgid "whether output is appended to ``fpath``"
msgstr "是否将输出以追加方式写入 ``fpath``"
#: megengine.jit.trace.dump:6 of
msgid ""
"whether to use float16 for I/O between oprs and use float32 as internal "
"computation precision. Note the output var would be changed to float16"
msgstr "是否使用float16作为算子间I/O的数据精度,同时使用float32作为内部计算精度。注意输出变量的类型也会随之变为float16。"
#: megengine.jit.trace.dump:9 of
msgid "whether to use float16 for both I/O and computation precision"
msgstr "是否使用float16同时作为算子间I/O和内部计算的数据精度"
#: megengine.jit.trace.dump:11 of
msgid ""
"whether to use NHWCD4 data format. This is faster on some OpenCL devices"
msgstr "是否使用NHWCD4数据格式。在某些OpenCL设备上,会提高计算速度"
#: megengine.jit.trace.dump:13 of
msgid ""
"whether to fuse conv+bias+nonlinearty into one opr. This is supported only "
"in NHWCD4 format."
msgstr "是否将conv + bias + 非线性激活融合成一个算子。仅在NHWCD4格式下支持。"
#: megengine.jit.trace.get_profile:1 of
msgid "Get profiling result for compiled trace."
msgstr "获取被追溯(trace)函数在编译后运行的性能结果。"
#: megengine.jit.trace.get_profile:3 of
msgid "a json compatible object."
msgstr "一个兼容json的对象。"
#: megengine.jit.trace.trace:1 of
msgid "Trace wrapped callable with provided arguments."
msgstr "使用提供的参数,追溯(trace)被包装的callable对象。"
#: ../../source/api_zh/megengine.jit.rst:11
msgid "megengine.jit.sublinear\\_memory\\_config"
msgstr "megengine.jit.sublinear\\_memory\\_config"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:1 of
msgid "Configuration for sublinear memory optimization."
msgstr "次线性内存优化的配置。"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:4 of
msgid ""
"number of samples both for searching in linear space and around current "
"thresh in sublinear memory optimization. Default: 10. It can also be set "
"through the environmental variable 'MGB_SUBLINEAR_MEMORY_THRESH_NR_TRY'."
msgstr ""
"次线性内存优化中,在线性空间内以及当前阈值(thresh)附近搜索的样本数目。默认:10。也可以通过环境变量 "
"'MGB_SUBLINEAR_MEMORY_THRESH_NR_TRY' 进行设置。"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:8 of
msgid ""
"number of iterations to find the best checkpoints in genetic algorithm. "
"Default: 0. It can also be set through the environmental variable "
"'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER'."
msgstr ""
"使用遗传算法寻找最优切分策略时的迭代轮数。默认:0。也可以通过环境变量 'MGB_SUBLINEAR_MEMORY_GENETIC_NR_ITER' "
"进行设置。"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:12 of
msgid ""
"number of samples for the crossover random selection during genetic "
"optimization. Default: 20. It can also be set through the environmental "
"variable 'MGB_SUBLINEAR_MEMORY_GENETIC_POOL_SIZE'."
msgstr ""
"遗传优化算法进行交叉随机选择(crossover)时所使用的样本数。默认:20。也可以通过环境变量 "
"'MGB_SUBLINEAR_MEMORY_GENETIC_POOL_SIZE' 进行设置。"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:16 of
msgid ""
"memory lower bound of bottleneck size in MB for sublinear memory "
"optimization. It can be used to perform manual tradeoff between memory and "
"speed. Default: 0. It can also be set through the environmental variable "
"'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB'."
msgstr ""
"次线性内存优化瓶颈大小的下界(以MB为单位)。它可用于在内存和速度之间进行手动权衡。默认:0。也可以通过设置环境变量 "
"'MGB_SUBLINEAR_MEMORY_LOWER_BOUND_MB' 来实现。"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:20 of
msgid ""
"number of thread workers to search the optimum checkpoints in sublinear "
"memory optimization. Default: half of cpu number in the system. Note: the "
"value must be greater or equal to one. It can also be set through the "
"environmental variable 'MGB_SUBLINEAR_MEMORY_WORKERS'."
msgstr ""
"搜索次线性内存优化最优切分策略时使用的线程数。默认:当前系统中CPU数目的一半。注意:该参数值需要大于等于1。也可以通过设置环境变量 "
"'MGB_SUBLINEAR_MEMORY_WORKERS' 来实现。"
#: megengine.jit.sublinear_memory_config.SublinearMemoryConfig:25 of
msgid ""
"Note that the environmental variable MGB_COMP_GRAPH_OPT must be set to "
"'enable_sublinear_memory_opt=1' in order for the above environmental "
"variable to be effective."
msgstr ""
"注意,为了使上述环境变量生效,需要将环境变量 MGB_COMP_GRAPH_OPT 设置为 "
"'enable_sublinear_memory_opt=1' 。"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-05-11 18:27+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/autogen/megengine.module.pytorch.rst:2
msgid "megengine.module.pytorch package"
msgstr "megengine.module.pytorch package"
#: ../../source/autogen/megengine.module.pytorch.rst:11
msgid "megengine.module.pytorch.pytorch"
msgstr "megengine.module.pytorch.pytorch"
#: megengine.module.pytorch.pytorch.PyTorchModule:1 of
msgid "Bases: :class:`megengine.module.module.Module`"
msgstr "基类: :class:`megengine.module.module.Module`"
#: megengine.module.pytorch.pytorch.PyTorchModule:1 of
msgid "Wrap a pytorch module as megengine module"
msgstr "把一个pytorch模块包装为megengine模块"
#: megengine.module.pytorch.pytorch.PyTorchModule
#: megengine.module.pytorch.pytorch.PyTorchModule.get_param_by_name
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.execute
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.infer_shape
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.setup
#: megengine.module.pytorch.pytorch.inp_mem_fwd
#: megengine.module.pytorch.pytorch.oup_mem_fwd
#: megengine.module.pytorch.pytorch.torch_param_to_mge
#: megengine.module.pytorch.utils.device_to_torch_device
#: megengine.module.pytorch.utils.torch_device_to_device
#: megengine.module.pytorch.utils.torch_dtype_to_numpy_dtype of
msgid "Parameters"
msgstr "参数"
#: megengine.module.pytorch.pytorch.PyTorchModule:3 of
msgid "torch module to be wrapped"
msgstr "待封装的torch模块"
#: megengine.module.pytorch.pytorch.PyTorchModule:4 of
msgid "target device this module would be in"
msgstr "该模块所在的目标设备"
#: megengine.module.pytorch.pytorch.PyTorchModule:5 of
msgid "output count of this module"
msgstr "该模块的输出计数"
#: megengine.module.pytorch.pytorch.PyTorchModule:6 of
msgid "input shape inferrer"
msgstr "输入形状推断器"
#: megengine.module.pytorch.pytorch.PyTorchModule:7 of
msgid "target comp_graph on which this module would be in"
msgstr "该模块所在的目标计算图(comp_graph)"
#: megengine.module.pytorch.pytorch.PyTorchModule.device:1
#: megengine.module.pytorch.pytorch.PyTorchModule.get_device:1 of
msgid "get the device this module belongs to"
msgstr "获取该模块所属的设备"
#: megengine.module.pytorch.pytorch.PyTorchModule.forward:1 of
msgid "apply the module on given inputs"
msgstr "在给定的输入上应用该模块"
#: megengine.module.pytorch.pytorch.PyTorchModule.forward
#: megengine.module.pytorch.pytorch.PyTorchModule.get_param_by_name
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.infer_shape
#: megengine.module.pytorch.pytorch.torch_param_to_mge
#: megengine.module.pytorch.utils.device_to_torch_device
#: megengine.module.pytorch.utils.torch_device_to_device
#: megengine.module.pytorch.utils.torch_dtype_to_numpy_dtype of
msgid "Returns"
msgstr "返回"
#: megengine.module.pytorch.pytorch.PyTorchModule.forward:3 of
msgid "output vars"
msgstr "输出变量"
#: megengine.module.pytorch.pytorch.PyTorchModule.get_param_by_name:1 of
msgid "find parameter by its name"
msgstr "通过参数名找到参数"
#: megengine.module.pytorch.pytorch.PyTorchModule.get_param_by_name:4 of
msgid "name of parameter"
msgstr "参数名"
#: megengine.module.pytorch.pytorch.PyTorchModule.get_param_by_name
#: megengine.module.pytorch.pytorch.inp_mem_fwd
#: megengine.module.pytorch.pytorch.oup_mem_fwd
#: megengine.module.pytorch.pytorch.torch_param_to_mge of
msgid "Return type"
msgstr "返回类型"
#: megengine.module.pytorch.pytorch.PyTorchModule.get_param_by_name:5
#: megengine.module.pytorch.pytorch.torch_param_to_mge:11 of
msgid ":py:class:`~megengine.core.tensor_nn.Parameter`"
msgstr ":py:class:`~megengine.core.tensor_nn.Parameter`"
#: megengine.module.pytorch.pytorch.PyTorchModule.get_param_by_name:6 of
msgid "the parameter"
msgstr "参数"
#: megengine.module.pytorch.pytorch.PyTorchModule.init_params:1 of
msgid ""
"forward torch parameters to megengine parameters and store, would be called "
"in constructor and setter of device"
msgstr "将torch参数转发为megengine参数并存储。该方法会在构造函数和device的设置器(setter)中被调用。"
#: megengine.module.pytorch.pytorch.PyTorchModule.set_device:1 of
msgid "set the device and move torch module to corresponding device"
msgstr "设置设备,并将torch模块移动到相应设备。"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr:1 of
msgid "Bases: :class:`megengine._internal.craniotome.CraniotomeBase`"
msgstr "基类: :class:`megengine._internal.craniotome.CraniotomeBase`"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr:1 of
msgid "This is a pytorch module wrapper to operator"
msgstr "将pytorch模块包装为算子的包装器"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.copy:1 of
msgid ""
"copy this craniotome descriptor; the default implementation creates a new "
"object, and copies object ``__dict__``"
msgstr "复制该craniotome描述符(descriptor);默认实现是创建一个新对象,并复制对象的 ``__dict__`` 。"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.execute:1 of
msgid ""
"execute the operator, read values from *inputs*, forward them to torch "
"tensor and do execution by self.func and forward results to outputs"
msgstr "执行算子。从 *inputs* 中读值,送至torch张量然后由self.func执行,最后将结果送至输出"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.execute:6 of
msgid "values for each input var"
msgstr "每个输入变量的值"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.execute:8 of
msgid "values for each output var"
msgstr "每个输出变量的值"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad:1 of
msgid ""
"generate a grad opr which calculates grad by torch.autograd.grad and cache "
"it"
msgstr "生成梯度算子,它使用torch.autograd.grad计算梯度并缓存。"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad:3 of
msgid "the input var with respect to which the gradient should be computed"
msgstr "需要对其计算梯度的输入变量"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad:6 of
msgid "operator inputs"
msgstr "算子输入"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad:8 of
msgid "operator outputs"
msgstr "算子输出"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad:10 of
msgid "gradients of each output var"
msgstr "每个输出变量对应的梯度"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.grad:11 of
msgid "an initialized grad opr"
msgstr "已初始化的梯度算子"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.infer_shape:1 of
msgid "infer output shape from input shapes"
msgstr "从输入的形状推断输出形状"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.infer_shape:3 of
msgid "input shapes as tuple"
msgstr "元组形式的输入形状"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.infer_shape:4 of
msgid "output shapes"
msgstr "输出形状"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.setup:1 of
msgid "Setup the operator by accepted kwargs"
msgstr "通过接受kwargs参数设置算子"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.setup:3 of
msgid "input count of torch module"
msgstr "torch模块的输入计数"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.setup:4 of
msgid "output count of torch module"
msgstr "torch模块的输出计数"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.setup:5 of
msgid ""
"a callable object accept inputs and returns outputs usually a torch module "
"itself"
msgstr "一个可调用对象,接受输入并返回输出。通常是一个torch模块本身"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.setup:7 of
msgid "parameters of the torch module"
msgstr "torch模块的参数"
#: megengine.module.pytorch.pytorch.PyTorchSubgraphImplOpr.setup:8 of
msgid "a callable infers output shapes from input shapes, defaults to None"
msgstr "一个可调用对象,从输入的形状推断输出形状,默认为None"
#: megengine.module.pytorch.pytorch.inp_mem_fwd:1 of
msgid "Forward a MegBrain tensor to torch tensor"
msgstr "将MegBrain张量转发(forward)为torch张量。"
#: megengine.module.pytorch.pytorch.inp_mem_fwd:4 of
msgid "pointer to MegBrain tensor"
msgstr "指向MegBrain张量的指针"
#: megengine.module.pytorch.pytorch.inp_mem_fwd:7 of
msgid ":py:class:`~torch.Tensor`"
msgstr ":py:class:`~torch.Tensor`"
#: megengine.module.pytorch.pytorch.oup_mem_fwd:1 of
msgid "Forward a torch tensor to a contiguous MegBrain tensor"
msgstr "将torch张量转发(forward)为连续型(contiguous)的MegBrain张量"
#: megengine.module.pytorch.pytorch.oup_mem_fwd:4 of
msgid "Pointer to the MegBrain tensor"
msgstr "指向MegBrain张量的指针"
#: megengine.module.pytorch.pytorch.oup_mem_fwd:6 of
msgid "The input torch tensor"
msgstr "输入torch张量"
#: megengine.module.pytorch.pytorch.oup_mem_fwd:8 of
msgid ""
"if True, memory copy is not allowed here, thus the input torch tensor must "
"be contiguous also. defaults to True"
msgstr "如果为True,则此处不允许内存复制,因此输入torch张量也必须是连续型(contiguous)的。默认为True"
#: megengine.module.pytorch.pytorch.oup_mem_fwd:13 of
msgid "``None``"
msgstr "``None``"
#: megengine.module.pytorch.pytorch.torch_param_to_mge:1 of
msgid "Convert a torch parameter to a megengine parameter"
msgstr "将torch参数转换为megengine参数"
#: megengine.module.pytorch.pytorch.torch_param_to_mge:4 of
msgid "parametr name"
msgstr "参数名称"
#: megengine.module.pytorch.pytorch.torch_param_to_mge:6 of
msgid "torch parameter"
msgstr "torch参数"
#: megengine.module.pytorch.pytorch.torch_param_to_mge:7 of
msgid ""
"the device on which the megengine parameter is, should be physically the "
"same as the one on torch parameter"
msgstr "megengine参数所在的设备,应与torch参数所在设备物理上一致"
#: megengine.module.pytorch.pytorch.torch_param_to_mge:10 of
msgid "the owner graph of megengine parameter"
msgstr "megengine参数的所属的图"
#: megengine.module.pytorch.pytorch.torch_param_to_mge:12 of
msgid "megengine parameter"
msgstr "megengine参数"
#: ../../source/autogen/megengine.module.pytorch.rst:19
msgid "megengine.module.pytorch.utils"
msgstr "megengine.module.pytorch.utils"
#: megengine.module.pytorch.utils.device_to_torch_device:1 of
msgid "map device to torch device"
msgstr "将device映射到torch设备"
#: megengine.module.pytorch.utils.device_to_torch_device:4 of
msgid "megbrain compute node"
msgstr "megbrain计算节点"
#: megengine.module.pytorch.utils.device_to_torch_device:5 of
msgid "corresponding torch device"
msgstr "相应的torch设备"
#: megengine.module.pytorch.utils.torch_device_to_device:1 of
msgid "map torch device to device"
msgstr "映射torch设备到device"
#: megengine.module.pytorch.utils.torch_device_to_device:4 of
msgid "torch device"
msgstr "torch设备"
#: megengine.module.pytorch.utils.torch_device_to_device:5 of
msgid "device"
msgstr "设备"
#: megengine.module.pytorch.utils.torch_dtype_to_numpy_dtype:1 of
msgid "map torch dtype to numpy dtype"
msgstr "将torch dtype映射到numpy dtype"
#: megengine.module.pytorch.utils.torch_dtype_to_numpy_dtype:4 of
msgid "torch dtype"
msgstr "torch dtype"
#: megengine.module.pytorch.utils.torch_dtype_to_numpy_dtype:5 of
msgid "numpy dtype"
msgstr "numpy dtype"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/api_zh/megengine.module.qat.rst:2
msgid "megengine.module.qat package"
msgstr "megengine.module.qat package"
#: ../../source/api_zh/megengine.module.qat.rst:11
msgid "megengine.module.qat.concat"
msgstr "megengine.module.qat.concat"
#: megengine.module.qat.concat.Concat:1 of
msgid ""
"Bases: :class:`megengine.module.concat.Concat`, "
":class:`megengine.module.qat.module.QATModule`"
msgstr ""
"基类: :class:`megengine.module.concat.Concat`, "
":class:`megengine.module.qat.module.QATModule`"
#: megengine.module.qat.concat.Concat:1 of
msgid ""
"A :class:`~.QATModule` to do functional concat with QAT support. Could be "
"applied with :class:`~.Observer` and :class:`~.FakeQuantize`."
msgstr ""
"支持QAT,实现concat函数计算的 :class:`~.QATModule` 。可搭配使用 :class:`~.Observer` 与 "
":class:`~.FakeQuantize` 。"
#: megengine.module.qat.concat.Concat.from_float_module:1
#: megengine.module.qat.conv.Conv2d.from_float_module:1
#: megengine.module.qat.elemwise.Elemwise.from_float_module:1
#: megengine.module.qat.linear.Linear.from_float_module:1
#: megengine.module.qat.module.QATModule.from_float_module:1
#: megengine.module.qat.quant_dequant.DequantStub.from_float_module:1
#: megengine.module.qat.quant_dequant.QuantStub.from_float_module:1 of
msgid ""
"Return a :class:`~.QATModule` instance converted from a float "
":class:`~.Module` instance."
msgstr "返回从浮点型 :class:`~.Module` 实例转换而来的 :class:`~.QATModule` 实例。"
#: ../../source/api_zh/megengine.module.qat.rst:19
msgid "megengine.module.qat.conv"
msgstr "megengine.module.qat.conv"
#: megengine.module.qat.conv.Conv2d:1 of
msgid ""
"Bases: :class:`megengine.module.conv.Conv2d`, "
":class:`megengine.module.qat.module.QATModule`"
msgstr ""
"基类: :class:`megengine.module.conv.Conv2d`, "
":class:`megengine.module.qat.module.QATModule`"
#: megengine.module.qat.conv.Conv2d:1 of
msgid ""
"A :class:`~.QATModule` Conv2d with QAT support. Could be applied with "
":class:`~.Observer` and :class:`~.FakeQuantize`."
msgstr ""
"支持QAT的 :class:`~.QATModule` 版本Conv2d。可搭配使用 :class:`~.Observer` 与 "
":class:`~.FakeQuantize` 。"
#: megengine.module.qat.conv.ConvRelu2d:1 of
msgid "Bases: :class:`megengine.module.qat.conv.Conv2d`"
msgstr "基类: :class:`megengine.module.qat.conv.Conv2d`"
#: megengine.module.qat.conv.ConvRelu2d:1 of
msgid ""
"A :class:`~.QATModule` include Conv2d and Relu with QAT support. Could be "
"applied with :class:`~.Observer` and :class:`~.FakeQuantize`."
msgstr ""
"支持QAT的包含Conv2d与Relu的 :class:`~.QATModule` 。可搭配使用 :class:`~.Observer` 与 "
":class:`~.FakeQuantize` 。"
#: ../../source/api_zh/megengine.module.qat.rst:27
msgid "megengine.module.qat.conv\\_bn"
msgstr "megengine.module.qat.conv\\_bn"
#: megengine.module.qat.conv_bn.ConvBn2d:1
#: megengine.module.qat.conv_bn.ConvBnRelu2d:1 of
msgid "Bases: :class:`megengine.module.qat.conv_bn._ConvBnActivation2d`"
msgstr "基类: :class:`megengine.module.qat.conv_bn._ConvBnActivation2d`"
#: megengine.module.qat.conv_bn.ConvBn2d:1 of
msgid ""
"A fused :class:`~.QATModule` including Conv2d, BatchNorm2d with QAT support."
" Could be applied with :class:`~.Observer` and :class:`~.FakeQuantize`."
msgstr ""
"支持QAT并融合Conv2d和BatchNorm2d的 :class:`~.QATModule` 。可搭配使用 :class:`~.Observer` 与 "
":class:`~.FakeQuantize` 。"
#: megengine.module.qat.conv_bn.ConvBnRelu2d:1 of
msgid ""
"A fused :class:`~.QATModule` including Conv2d, BatchNorm2d and relu with QAT"
" support. Could be applied with :class:`~.Observer` and "
":class:`~.FakeQuantize`."
msgstr ""
"支持QAT并融合Conv2d,BatchNorm2d和relu的 :class:`~.QATModule` 。可搭配使用 "
":class:`~.Observer` 与 :class:`~.FakeQuantize` 。"
#: ../../source/api_zh/megengine.module.qat.rst:35
msgid "megengine.module.qat.elemwise"
msgstr "megengine.module.qat.elemwise"
#: megengine.module.qat.elemwise.Elemwise:1 of
msgid ""
"Bases: :class:`megengine.module.elemwise.Elemwise`, "
":class:`megengine.module.qat.module.QATModule`"
msgstr ""
"基类: :class:`megengine.module.elemwise.Elemwise`, "
":class:`megengine.module.qat.module.QATModule`"
#: megengine.module.qat.elemwise.Elemwise:1 of
msgid ""
"A :class:`~.QATModule` to do elemwise operator with QAT support. Could be "
"applied with :class:`~.Observer` and :class:`~.FakeQuantize`."
msgstr ""
"支持QAT,具有elemwise算子功能的 :class:`~.QATModule` 。可搭配使用 :class:`~.Observer` 和 "
":class:`~.FakeQuantize` 。"
#: megengine.module.qat.elemwise.Elemwise megengine.module.qat.linear.Linear
#: of
msgid "Parameters"
msgstr "参数"
#: megengine.module.qat.elemwise.Elemwise:4 of
msgid ""
"the elemwise method, see :class:`~.module.elemwise.Elemwise` for detail."
msgstr "elemwise方法,可参考 :class:`~.module.elemwise.Elemwise` 获取详情。"
#: ../../source/api_zh/megengine.module.qat.rst:43
msgid "megengine.module.qat.linear"
msgstr "megengine.module.qat.linear"
#: megengine.module.qat.linear.Linear:1 of
msgid ""
"Bases: :class:`megengine.module.linear.Linear`, "
":class:`megengine.module.qat.module.QATModule`"
msgstr ""
"基类: :class:`megengine.module.linear.Linear`, "
":class:`megengine.module.qat.module.QATModule`"
#: megengine.module.qat.linear.Linear:1 of
msgid ""
"A :class:`~.QATModule` version of :class:`~.module.linear.Linear`. Could be "
"applied with :class:`~.Observer` and :class:`~.FakeQuantize`."
msgstr ""
":class:`~.QATModule` 型 :class:`~.module.linear.Linear` 。可搭配使用 "
":class:`~.Observer` 和 :class:`~.FakeQuantize` 。"
#: megengine.module.qat.linear.Linear:5 of
msgid "size of each input sample."
msgstr "各输入样本的大小。"
#: megengine.module.qat.linear.Linear:7 of
msgid "size of each output sample."
msgstr "各输出样本的大小。"
#: megengine.module.qat.linear.Linear:9 of
msgid ""
"If set to ``False``, the layer will not learn an additive bias. Default: "
"``True``"
msgstr "若为 ``False`` ,该层将不会学习加性偏置。默认: ``True`` 。"
#: ../../source/api_zh/megengine.module.qat.rst:51
msgid "megengine.module.qat.module"
msgstr "megengine.module.qat.module"
#: megengine.module.qat.module.QATModule:1 of
msgid "Bases: :class:`megengine.module.module.Module`"
msgstr "基类: :class:`megengine.module.module.Module`"
#: megengine.module.qat.module.QATModule:1 of
msgid ""
"Base class of quantized-float related Module, basically for QAT and "
"Calibration."
msgstr "浮点数量化相关模块的基类。主要用于 QAT 和 Calibration 。"
#: megengine.module.qat.module.QATModule:3 of
msgid ""
"Use :meth:`~.QATModule.from_float_module` to generate a instance from float "
":class:`~.Module`. Or use :func:`~.quantize.quantize_qat` to do it "
"recursively and automatically."
msgstr ""
"使用 :meth:`~.QATModule.from_float_module` 从浮点数型 :class:`~.Module` 中生成实例。或使用 "
":func:`~.quantize.quantize_qat` 来自动递归进行此操作。"
#: megengine.module.qat.module.QATModule:6 of
msgid ""
"Can also be converted to :class:`~.QuantizedModule` for deployment using "
":func:`~.quantize.quantize` further."
msgstr ""
"可之后使用 :func:`~.quantize.quantize` 将该模块转为 :class:`~.QuantizedModule` 用于部署。"
#: megengine.module.qat.module.QATModule.apply_quant_activation:1
#: megengine.module.qat.module.QATModule.apply_quant_weight:1 of
msgid "Apply weight's observer and fake_quant from ``qconfig`` on ``target``."
msgstr "在 ``target`` 上根据 ``qconfig`` 应用权重observer以及fake_quant。"
#: megengine.module.qat.module.QATModule.get_activation_dtype:1 of
msgid "Get activation's quantization dtype as the method from ``qconfig``."
msgstr "按照 ``qconfig`` 中指定的方法获取激活值的量化数据类型。"
#: megengine.module.qat.module.QATModule.get_weight_dtype:1 of
msgid "Get weight's quantization dtype as the method from ``qconfig``."
msgstr "按照 ``qconfig`` 中指定的方法获取权重的量化数据类型。"
#: megengine.module.qat.module.QATModule.set_qconfig:1 of
msgid ""
"Set quantization related configs with ``qconfig``, including observer and "
"fake_quant for weight and activation."
msgstr "使用 ``qconfig`` 更改量化相关配置。包含权重和激活值的 observer 和 fake_quant 。"
#: ../../source/api_zh/megengine.module.qat.rst:59
msgid "megengine.module.qat.quant\\_dequant"
msgstr "megengine.module.qat.quant\\_dequant"
#: megengine.module.qat.quant_dequant.DequantStub:1 of
msgid ""
"Bases: :class:`megengine.module.quant_dequant.DequantStub`, "
":class:`megengine.module.qat.module.QATModule`"
msgstr ""
"基类: :class:`megengine.module.quant_dequant.DequantStub`, "
":class:`megengine.module.qat.module.QATModule`"
#: megengine.module.qat.quant_dequant.DequantStub:1 of
msgid ""
"A helper QATModule simply return input, but will de-quantize input after "
"converted to :class:`~.QuantizedModule`."
msgstr "仅返回输入的辅助QATModule,但在转换为 :class:`~.QuantizedModule` 之后,会对输入进行反量化。"
#: megengine.module.qat.quant_dequant.QuantStub:1 of
msgid ""
"Bases: :class:`megengine.module.quant_dequant.QuantStub`, "
":class:`megengine.module.qat.module.QATModule`"
msgstr ""
"基类: :class:`megengine.module.quant_dequant.QuantStub`, "
":class:`megengine.module.qat.module.QATModule`"
#: megengine.module.qat.quant_dequant.QuantStub:1 of
msgid ""
"A helper QATModule simply return input, but will quantize input after "
"converted to :class:`~.QuantizedModule`."
msgstr "仅返回输入的辅助QATModule,但在转换为 :class:`~.QuantizedModule` 之后,会对输入进行量化。"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/api_zh/megengine.module.quantized.rst:2
msgid "megengine.module.quantized package"
msgstr "megengine.module.quantized package"
#: ../../source/api_zh/megengine.module.quantized.rst:11
msgid "megengine.module.quantized.concat"
msgstr "megengine.module.quantized.concat"
#: megengine.module.quantized.concat.Concat:1
#: megengine.module.quantized.elemwise.Elemwise:1
#: megengine.module.quantized.linear.Linear:1
#: megengine.module.quantized.quant_dequant.DequantStub:1
#: megengine.module.quantized.quant_dequant.QuantStub:1 of
msgid "Bases: :class:`megengine.module.quantized.module.QuantizedModule`"
msgstr "基类: :class:`megengine.module.quantized.module.QuantizedModule`"
#: megengine.module.quantized.concat.Concat:1 of
msgid "A :class:`~.QuantizedModule` to do quantized concat, inference only."
msgstr "实现量化(quantized)concat的 :class:`~.QuantizedModule` ,仅用于推理阶段。"
#: megengine.module.quantized.concat.Concat.from_qat_module:1
#: megengine.module.quantized.conv.Conv2d.from_qat_module:1
#: megengine.module.quantized.elemwise.Elemwise.from_qat_module:1
#: megengine.module.quantized.linear.Linear.from_qat_module:1
#: megengine.module.quantized.module.QuantizedModule.from_qat_module:1
#: megengine.module.quantized.quant_dequant.DequantStub.from_qat_module:1
#: megengine.module.quantized.quant_dequant.QuantStub.from_qat_module:1 of
msgid ""
"return a :class:`~.QuantizedModule` instance converted from a "
":class:`~.QATModule` instance."
msgstr "返回从 :class:`~.QATModule` 实例转换而来的 :class:`~.QuantizedModule` 实例。"
#: ../../source/api_zh/megengine.module.quantized.rst:19
msgid "megengine.module.quantized.conv"
msgstr "megengine.module.quantized.conv"
#: megengine.module.quantized.conv.Conv2d:1 of
msgid ""
"Bases: :class:`megengine.module.conv.Conv2d`, "
":class:`megengine.module.quantized.module.QuantizedModule`"
msgstr ""
"基类: :class:`megengine.module.conv.Conv2d`, "
":class:`megengine.module.quantized.module.QuantizedModule`"
#: megengine.module.quantized.conv.Conv2d:1 of
msgid "quantized version of :class:`~.qat.conv.Conv2d`."
msgstr "量化(quantized)版本 :class:`~.qat.conv.Conv2d` 。"
#: megengine.module.quantized.conv.ConvRelu2d:1 of
msgid "Bases: :class:`megengine.module.quantized.conv.Conv2d`"
msgstr "基类: :class:`megengine.module.quantized.conv.Conv2d`"
#: megengine.module.quantized.conv.ConvRelu2d:1 of
msgid "quantized version of :class:`~.qat.conv.ConvRelu2d`."
msgstr "量化(quantized)版本 :class:`~.qat.conv.ConvRelu2d` 。"
#: ../../source/api_zh/megengine.module.quantized.rst:27
msgid "megengine.module.quantized.conv\\_bn"
msgstr "megengine.module.quantized.conv\\_bn"
#: megengine.module.quantized.conv_bn.ConvBn2d:1
#: megengine.module.quantized.conv_bn.ConvBnRelu2d:1 of
msgid "Bases: :class:`megengine.module.quantized.conv_bn._ConvBnActivation2d`"
msgstr "基类: :class:`megengine.module.quantized.conv_bn._ConvBnActivation2d`"
#: megengine.module.quantized.conv_bn.ConvBn2d:1 of
msgid "quantized version of :class:`~.qat.conv_bn.ConvBn2d`."
msgstr "量化(quantized)版本 :class:`~.qat.conv_bn.ConvBn2d` 。"
#: megengine.module.quantized.conv_bn.ConvBnRelu2d:1 of
msgid "quantized version of :class:`~.qat.conv_bn.ConvBnRelu2d`."
msgstr "量化(quantized)版本 :class:`~.qat.conv_bn.ConvBnRelu2d` 。"
#: ../../source/api_zh/megengine.module.quantized.rst:35
msgid "megengine.module.quantized.elemwise"
msgstr "megengine.module.quantized.elemwise"
#: megengine.module.quantized.elemwise.Elemwise:1 of
msgid "quantized version of :class:`~.qat.elemwise.Elemwise`."
msgstr "量化(quantized)版本 :class:`~.qat.elemwise.Elemwise` 。"
#: ../../source/api_zh/megengine.module.quantized.rst:43
msgid "megengine.module.quantized.linear"
msgstr "megengine.module.quantized.linear"
#: megengine.module.quantized.linear.Linear:1 of
msgid "quantized version of :class:`~.qat.linear.Linear`."
msgstr "量化(quantized)版本 :class:`~.qat.linear.Linear` 。"
#: ../../source/api_zh/megengine.module.quantized.rst:51
msgid "megengine.module.quantized.module"
msgstr "megengine.module.quantized.module"
#: megengine.module.quantized.module.QuantizedModule:1 of
msgid "Bases: :class:`megengine.module.module.Module`"
msgstr "基类: :class:`megengine.module.module.Module`"
#: megengine.module.quantized.module.QuantizedModule:1 of
msgid ""
"Base class of quantized Module, which should be converted from QATModule and"
" not support traning."
msgstr "量化(quantized)版本Module的基类。应从QATModule转换而来,不支持训练。"
#: ../../source/api_zh/megengine.module.quantized.rst:59
msgid "megengine.module.quantized.quant\\_dequant"
msgstr "megengine.module.quantized.quant\\_dequant"
#: megengine.module.quantized.quant_dequant.DequantStub:1 of
msgid ""
"quantized version of :class:`~.qat.quant_dequant.DequantStub`, will restore "
"quantized input to float32 dtype."
msgstr "量化(quantized)版本 :class:`~.qat.quant_dequant.DequantStub` ,会将量化后的输入恢复为float32类型。"
#: megengine.module.quantized.quant_dequant.QuantStub:1 of
msgid ""
"quantized version of :class:`~.qat.quant_dequant.QuantStub`, will convert "
"input to quantized dtype."
msgstr "量化(quantized)版本 :class:`~.qat.quant_dequant.QuantStub` ,可将输入转化为量化数据类型。"
#~ msgid ""
#~ "Replace :class:`~.module.QATModule`'s ``to_quantized`` method. implemented "
#~ "here to avoid circular import."
#~ msgstr ""
#~ "将 :class:`~.module.QATModule` 的 ``to_quantized`` 方法替换掉。为了避免循环import而实现在这里。"
#~ msgid "quantized module for elemwise operator, inference only."
#~ msgstr "elemwise算子的量化Module,仅用于推理阶段。"
#~ msgid "Parameters"
#~ msgstr "参数"
#~ msgid ""
#~ "the elemwise method, supported string refer to "
#~ ":class:`~.module.elemwise.Elemwise`. it will do quantized operator with "
#~ "specified output quantized dtype."
#~ msgstr ""
#~ "elemwise方法,其支持的字符串可以参考 :class:`~.module.elemwise.Elemwise` "
#~ "。该Module将按照输出指定的量化数据类型做相应的量化操作。"
#~ msgid "A helper de-quantize operation and inference only."
#~ msgstr "辅助反量化操作,仅用于推理阶段。"
#~ msgid "A helper quantize operation on input and inference only."
#~ msgstr "对输入数据进行量化的辅助函数,仅用于推理阶段。"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/api_zh/megengine.optimizer.rst:2
msgid "megengine.optimizer package"
msgstr "megengine.optimizer package"
#: ../../source/api_zh/megengine.optimizer.rst:11
msgid "megengine.optimizer.adadelta"
msgstr "megengine.optimizer.adadelta"
#: megengine.optimizer.adadelta.Adadelta:1
#: megengine.optimizer.adagrad.Adagrad:1 megengine.optimizer.adam.Adam:1
#: megengine.optimizer.sgd.SGD:1 of
msgid "Bases: :class:`megengine.optimizer.optimizer.Optimizer`"
msgstr "基类: :class:`megengine.optimizer.optimizer.Optimizer`"
#: megengine.optimizer.adadelta.Adadelta:1 of
msgid "Implements Adadelta algorithm."
msgstr "实现Adadelta算法。"
#: megengine.optimizer.adadelta.Adadelta:3 of
msgid ""
"It has been proposed in `\"ADADELTA: An Adaptive Learning Rate Method\" "
"<https://arxiv.org/abs/1212.5701>`_."
msgstr ""
"该算法在 `\"ADADELTA: An Adaptive Learning Rate Method\" "
"<https://arxiv.org/abs/1212.5701>`_ 中被提出。"
#: megengine.optimizer.adadelta.Adadelta megengine.optimizer.adagrad.Adagrad
#: megengine.optimizer.adam.Adam megengine.optimizer.lr_scheduler.LRScheduler
#: megengine.optimizer.lr_scheduler.LRScheduler.load_state_dict
#: megengine.optimizer.multi_step_lr.MultiStepLR
#: megengine.optimizer.multi_step_lr.MultiStepLR.load_state_dict
#: megengine.optimizer.optimizer.Optimizer
#: megengine.optimizer.optimizer.Optimizer.add_param_group
#: megengine.optimizer.optimizer.Optimizer.backward
#: megengine.optimizer.optimizer.Optimizer.load_state_dict
#: megengine.optimizer.sgd.SGD of
msgid "Parameters"
msgstr "参数"
#: megengine.optimizer.adadelta.Adadelta:6
#: megengine.optimizer.adagrad.Adagrad:7 megengine.optimizer.adam.Adam:4
#: megengine.optimizer.sgd.SGD:7 of
msgid "iterable of parameters to optimize or dicts defining parameter groups."
msgstr "可迭代对象,可以是一组待优化的参数,或定义几组参数的dict类型。"
#: megengine.optimizer.adadelta.Adadelta:9 of
msgid ""
"coefficient that scale delta before it is applied to the parameters "
"(default: 1.0)."
msgstr "在将delta应用于参数之前对其进行缩放的系数(默认: 1.0)。"
#: megengine.optimizer.adadelta.Adadelta:12 of
msgid ""
"coefficient used for computing a running average of squared gradients "
"(default: 0.9)."
msgstr "用于计算梯度平方的移动平均值(running average)的系数(默认: 0.9)。"
#: megengine.optimizer.adadelta.Adadelta:15 of
msgid ""
"term added to the denominator to improve numerical stability (default: "
"1e-6)."
msgstr "加到分母上以提高数值稳定性的值 (默认: 1e-6)。"
#: megengine.optimizer.adadelta.Adadelta:18
#: megengine.optimizer.adagrad.Adagrad:18 of
msgid "weight decay (L2 penalty) (default: 0)."
msgstr "权重衰减(L2惩罚)(默认: 0)。"
#: ../../source/api_zh/megengine.optimizer.rst:19
msgid "megengine.optimizer.adagrad"
msgstr "megengine.optimizer.adagrad"
#: megengine.optimizer.adagrad.Adagrad:1 of
msgid "Implements Adagrad algorithm."
msgstr "实现Adagrad算法。"
#: megengine.optimizer.adagrad.Adagrad:3 of
msgid ""
"It has been proposed in `\"Adaptive Subgradient Methods for Online Learning "
"and Stochastic Optimization\" <http://jmlr.org/papers/v12/duchi11a.html>`_."
msgstr ""
"该算法在 `\"Adaptive Subgradient Methods for Online Learning and Stochastic "
"Optimization\" <http://jmlr.org/papers/v12/duchi11a.html>`_ 中被提出。"
#: megengine.optimizer.adagrad.Adagrad:10 of
msgid ""
"coefficient that scale delta before it is applied to the parameters "
"(default: 1e-2)."
msgstr "在将delta应用于参数之前对其进行缩放的系数(默认: 1e-2)。"
#: megengine.optimizer.adagrad.Adagrad:13 of
msgid "learning rate decay (default: 0)"
msgstr "学习率衰减(默认: 0)"
#: megengine.optimizer.adagrad.Adagrad:15 of
msgid ""
"term added to the denominator to improve numerical stability (default: "
"1e-10)."
msgstr "加到分母上以提高数值稳定性的值(默认: 1e-10)。"
#: ../../source/api_zh/megengine.optimizer.rst:27
msgid "megengine.optimizer.adam"
msgstr "megengine.optimizer.adam"
#: megengine.optimizer.adam.Adam:1 of
msgid ""
"Implements Adam algorithm proposed in `\"Adam: A Method for Stochastic "
"Optimization\" <https://arxiv.org/abs/1412.6980>`_."
msgstr ""
"实现 `\"Adam: A Method for Stochastic Optimization\" "
"<https://arxiv.org/abs/1412.6980>`_ 中提出的Adam算法。"
#: megengine.optimizer.adam.Adam:7 megengine.optimizer.sgd.SGD:10 of
msgid "learning rate."
msgstr "学习率(learning rate)。"
#: megengine.optimizer.adam.Adam:9 of
msgid ""
"coefficients used for computing running averages of gradient and its square."
" Default: (0.9, 0.999)"
msgstr "一组系数,用于计算梯度及其平方的移动平均值(running average)。默认:(0.9,0.999)"
#: megengine.optimizer.adam.Adam:12 of
msgid ""
"term added to the denominator to improve numerical stability Default: 1e-8"
msgstr "加到分母上以提高数值稳定性的值,默认:1e-8"
#: megengine.optimizer.adam.Adam:15 of
msgid "weight decay (L2 penalty). Default: 0"
msgstr "权重衰减(L2惩罚)。默认:0"
#: ../../source/api_zh/megengine.optimizer.rst:35
msgid "megengine.optimizer.internal"
msgstr "megengine.optimizer.internal"
#: megengine.optimizer.internal.add_update_fastpath:1 of
msgid ""
"a fast-path ONLY used to update parameters in optimizer, since it would "
"bypass computing graph and launch dnn/add_update kernel directly, it is more"
" efficient than functional/add_update."
msgstr ""
"仅用于在优化器中更新参数的快捷路径(fast-path)。由于它会绕过计算图而直接启动dnn/add_update "
"kernel,因此比functional/add_update更加高效。"
#: ../../source/api_zh/megengine.optimizer.rst:43
msgid "megengine.optimizer.lr\\_scheduler"
msgstr "megengine.optimizer.lr\\_scheduler"
#: megengine.optimizer.lr_scheduler.LRScheduler:1
#: megengine.optimizer.optimizer.Optimizer:1 of
msgid "Bases: :class:`object`"
msgstr "基类: :class:`object`"
#: megengine.optimizer.lr_scheduler.LRScheduler:1 of
msgid "Base class for all learning rate based schedulers."
msgstr "所有学习率调度器的基类。"
#: megengine.optimizer.lr_scheduler.LRScheduler:4
#: megengine.optimizer.multi_step_lr.MultiStepLR:5 of
msgid "Wrapped optimizer."
msgstr "包装后的优化器。"
#: megengine.optimizer.lr_scheduler.LRScheduler:6 of
msgid "The index of current epoch. Default: -1"
msgstr "当前epoch的索引。默认:-1"
#: megengine.optimizer.lr_scheduler.LRScheduler.get_lr:1
#: megengine.optimizer.multi_step_lr.MultiStepLR.get_lr:1 of
msgid "Compute current learning rate for the scheduler."
msgstr "计算当前调度器(scheduler)的学习率。"
#: megengine.optimizer.lr_scheduler.LRScheduler.load_state_dict:1
#: megengine.optimizer.multi_step_lr.MultiStepLR.load_state_dict:1 of
msgid "Loads the schedulers state."
msgstr "加载调度器(scheduler)的状态。"
#: megengine.optimizer.lr_scheduler.LRScheduler.load_state_dict:3
#: megengine.optimizer.multi_step_lr.MultiStepLR.load_state_dict:3 of
msgid "scheduler state."
msgstr "调度器(scheduler)的状态。"
#: megengine.optimizer.lr_scheduler.LRScheduler.state_dict:1
#: megengine.optimizer.multi_step_lr.MultiStepLR.state_dict:1 of
msgid ""
"Returns the state of the scheduler as a :class:`dict`. It contains an entry "
"for every variable in self.__dict__ which is not the optimizer."
msgstr ""
"以 :class:`dict` 的形式返回调度器(scheduler)的状态。其中包含 self.__dict__ 中除优化器以外的每个变量所对应的条目。"
#: ../../source/api_zh/megengine.optimizer.rst:51
msgid "megengine.optimizer.multi\\_step\\_lr"
msgstr "megengine.optimizer.multi\\_step\\_lr"
#: megengine.optimizer.multi_step_lr.MultiStepLR:1 of
msgid "Bases: :class:`megengine.optimizer.lr_scheduler.LRScheduler`"
msgstr "基类: :class:`megengine.optimizer.lr_scheduler.LRScheduler`"
#: megengine.optimizer.multi_step_lr.MultiStepLR:2 of
msgid "Decays the learning rate of each parameter group by gamma once the"
msgstr "以gamma为倍率阶梯式衰减各参数组的学习率"
#: megengine.optimizer.multi_step_lr.MultiStepLR:2 of
msgid "number of epoch reaches one of the milestones."
msgstr "当epoch数达到milestones中的某一值时执行。"
#: megengine.optimizer.multi_step_lr.MultiStepLR:6 of
msgid "List of epoch indices. Must be increasing."
msgstr "epoch索引列表。必须按递增排序。"
#: megengine.optimizer.multi_step_lr.MultiStepLR:7 of
msgid "Multiplicative factor of learning rate decay. Default: 0.1."
msgstr "学习率衰减的乘数因子。默认:0.1。"
#: megengine.optimizer.multi_step_lr.MultiStepLR:9 of
msgid "The index of current epoch. Default: -1."
msgstr "当前epoch的索引。默认:-1。"
#: ../../source/api_zh/megengine.optimizer.rst:59
msgid "megengine.optimizer.optimizer"
msgstr "megengine.optimizer.optimizer"
#: megengine.optimizer.optimizer.Optimizer:1 of
msgid "Base class for all optimizers."
msgstr "所有优化器的基类。"
#: megengine.optimizer.optimizer.Optimizer:4 of
msgid "specifies what Tensors should be optimized."
msgstr "指定应该优化哪些张量。"
#: megengine.optimizer.optimizer.Optimizer:6 of
msgid ""
"a dict of default parameters of Optimizer, like learning rate or momentum."
msgstr "一个含有优化器默认参数的dict,如含有学习率(learning rate)和动量(momentum)。"
#: megengine.optimizer.optimizer.Optimizer:8 of
msgid ""
"interval time between two broadcast of distributed training. Default: 500"
msgstr "分布式训练中每两次参数广播的间隔时间。默认:500"
#: megengine.optimizer.optimizer.Optimizer.add_param_group:1 of
msgid ""
"Add a param group to ``param_groups`` of the "
":class:`~megengine.optim.optimizer.Optimizer`."
msgstr ""
"向 :class:`~megengine.optim.optimizer.Optimizer` 的 ``param_groups`` "
"中添加一组参数。"
#: megengine.optimizer.optimizer.Optimizer.add_param_group:3 of
msgid ""
"This can be useful when fine tuning a pre-trained network as frozen layers "
"can be made trainable and added to the "
":class:`~megengine.optim.optimizer.Optimizer` as training progresses."
msgstr ""
"该方法在微调(fine-tuning)预训练网络时很有用:随着训练的进行,可以将冻结层变为可训练层并添加到 "
":class:`~megengine.optim.optimizer.Optimizer` 中。"
#: megengine.optimizer.optimizer.Optimizer.add_param_group:7 of
msgid "specifies what tensors should be optimized along with group."
msgstr "指定了应与参数组一起进行优化的张量。"
#: megengine.optimizer.optimizer.Optimizer.backward:1 of
msgid "Computes the back-propagation of the network given loss."
msgstr "根据给定的网络损失计算反向传播。"
#: megengine.optimizer.optimizer.Optimizer.backward:4 of
msgid "The obtained loss tensor"
msgstr "得到的损失张量"
#: megengine.optimizer.optimizer.Optimizer.load_state_dict:1 of
msgid "Loads the optimizer state."
msgstr "加载优化器状态。"
#: megengine.optimizer.optimizer.Optimizer.load_state_dict:4 of
msgid ""
"optimizer state. Should be an object returned from a call to "
":meth:`state_dict`."
msgstr "优化器状态。应为调用 :meth:`state_dict` 返回的对象。"
#: megengine.optimizer.optimizer.Optimizer.state_dict:1 of
msgid "Export the optimizer state."
msgstr "导出优化器状态。"
#: megengine.optimizer.optimizer.Optimizer.state_dict of
msgid "Return type"
msgstr "返回类型"
#: megengine.optimizer.optimizer.Optimizer.state_dict:3 of
msgid ":py:class:`~typing.Dict`"
msgstr ":py:class:`~typing.Dict`"
#: megengine.optimizer.optimizer.Optimizer.state_dict of
msgid "Returns"
msgstr "返回"
#: megengine.optimizer.optimizer.Optimizer.state_dict:4 of
msgid "optimizer state. Can be loaded by :meth:`load_state_dict`."
msgstr "优化器状态。可通过 :meth:`load_state_dict` 来加载。"
#: megengine.optimizer.optimizer.Optimizer.step:1 of
msgid "Performs a single optimization step."
msgstr "执行单一优化步骤。"
#: megengine.optimizer.optimizer.Optimizer.zero_grad:1 of
msgid "Reset the grad to zeros."
msgstr "重置梯度为零。"
#: ../../source/api_zh/megengine.optimizer.rst:67
msgid "megengine.optimizer.sgd"
msgstr "megengine.optimizer.sgd"
#: megengine.optimizer.sgd.SGD:1 of
msgid "Implements stochastic gradient descent."
msgstr "实现随机梯度下降。"
#: megengine.optimizer.sgd.SGD:3 of
#, python-format
msgid ""
"Nesterov momentum is based on the formula from `\"On the importance of "
"initialization and momentum in deep learning\" "
"<http://www.cs.toronto.edu/%7Ehinton/absps/momentum.pdf>`_ ."
msgstr ""
"Nesterov momentum的实现是基于 `\"On the importance of initialization and momentum "
"in deep learning\" "
"<http://www.cs.toronto.edu/%7Ehinton/absps/momentum.pdf>`_ 中的公式。"
#: megengine.optimizer.sgd.SGD:12 of
msgid "momentum factor. Default: 0.0"
msgstr "momentum因子。默认:0.0"
#: megengine.optimizer.sgd.SGD:14 of
msgid "weight decay (L2 penalty). Default: 0.0"
msgstr "权重衰减(L2惩罚)。默认:0.0"
#~ msgid "Implements Adam algorithm."
#~ msgstr "实现Adam算法。"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-17 15:24+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/autogen/megengine.rst:2
msgid "megengine package"
msgstr "megengine package"
#: ../../source/autogen/megengine.rst:11
msgid "megengine.logger"
msgstr "megengine.logger"
#: megengine.logger.MegBrainLogFormatter:1 of
msgid "Bases: :class:`megengine.logger.MegEngineLogFormatter`"
msgstr "基类: :class:`megengine.logger.MegEngineLogFormatter`"
#: megengine.logger.MegEngineLogFormatter:1 of
msgid "Bases: :class:`logging.Formatter`"
msgstr "基类: :class:`logging.Formatter`"
#: megengine.logger.MegEngineLogFormatter.format:1 of
msgid "Format the specified record as text."
msgstr "将指定的记录格式化为文本。"
#: megengine.logger.MegEngineLogFormatter.format:3 of
msgid ""
"The record's attribute dictionary is used as the operand to a string "
"formatting operation which yields the returned string. Before formatting the"
" dictionary, a couple of preparatory steps are carried out. The message "
"attribute of the record is computed using LogRecord.getMessage(). If the "
"formatting string uses the time (as determined by a call to usesTime(), "
"formatTime() is called to format the event time. If there is exception "
"information, it is formatted using formatException() and appended to the "
"message."
msgstr ""
"record的属性字典用作字符串格式化操作中的操作数,该操作产生返回的字符串。在格式化字典之前,需要执行几个准备步骤。使用LogRecord.getMessage()计算record的消息属性。如果格式化字符串使用时间(通过调用usesTime()来确定),则将调用formatTime()来格式化事件时间。如果存在异常信息,则使用formatException()将其格式化并附加到消息中。"
#: megengine.logger.enable_debug_log:1 of
msgid "Sets logging level to debug for all components."
msgstr "将所有组件的日志记录级别(logging level)设为debug。"
#: megengine.logger.get_logger:1 of
msgid "Gets megengine logger with given name."
msgstr "获取给定名称的megengine日志记录器(logger)。"
#: megengine.logger.replace_mgb_log_level:1 of
msgid "Replaces megbrain log level in a block and restore after exiting."
msgstr "在一个代码块(block)内替换 megbrain 的日志级别,并在退出后恢复原级别。"
#: megengine.logger.replace_mgb_log_level megengine.logger.set_log_file
#: megengine.logger.set_log_level megengine.logger.set_mgb_log_level of
msgid "Parameters"
msgstr "参数"
#: megengine.logger.replace_mgb_log_level:4
#: megengine.logger.set_mgb_log_level:4 of
msgid "new log level"
msgstr "新的日志级别"
#: megengine.logger.set_log_file:1 of
msgid "Sets log output file."
msgstr "设置日志的输出文件。"
#: megengine.logger.set_log_file:4 of
msgid ""
"file-like object that supports write and flush, or string for the filename"
msgstr "支持 write 和 flush 操作的类文件对象,或表示文件名的字符串"
#: megengine.logger.set_log_file:7 of
msgid "specify the mode to open log file if *fout* is a string"
msgstr "如果 *fout* 是一个字符串,则指定打开日志文件的模式"
#: megengine.logger.set_log_level:1 of
msgid "Sets default logging level."
msgstr "设置默认的日志记录级别。"
#: megengine.logger.set_log_level:4 of
msgid "loggin level given by python :mod:`logging` module"
msgstr "由 python 的 :mod:`logging` 模块给出的日志记录级别"
#: megengine.logger.set_log_level:5 of
msgid "whether to update existing loggers"
msgstr "是否更新现有的日志记录器(logger)"
#: megengine.logger.set_mgb_log_level:1 of
msgid "Sets megbrain log level"
msgstr "设置megbrain的日志级别"
#: megengine.logger.set_mgb_log_level of
msgid "Returns"
msgstr "返回值"
#: megengine.logger.set_mgb_log_level:5 of
msgid "original log level"
msgstr "原来的日志级别"
#: ../../source/autogen/megengine.rst:19
msgid "megengine.version"
msgstr "megengine.version"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2020, Megvii
# This file is distributed under the same license as the MegEngine Documents
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: MegEngine Documents\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-06-15 09:28-0700\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n"
#: ../../source/api_zh/megengine.test.rst:2
msgid "megengine.test package"
msgstr "megengine.test package"
#: megengine.test.assertTensorClose of
msgid "Parameters"
msgstr "参数"
#: megengine.test.assertTensorClose:2 of
msgid ""
"whether to allow :attr:`v0` and :attr:`v1` to contain inf and nan values."
msgstr "是否允许 :attr:`v0` 和 :attr:`v1` 包含inf和NaN值。"
#: megengine.test.assertTensorClose:4 of
msgid "relative error"
msgstr "相对误差"
#~ msgid "max_err: relative error"
#~ msgstr "max_err:相对误差"